


To show how git can be used to patch binary installations I wrote up a small program that contains a piece of 'logic' and a configurable message: the wolfc/junk-patch repository on GitHub.


Using the junk-patch repository I created four binary distributions: 1.0, the one-off patches p1 and p2, and 1.1.


The four distributions I put into the wolfc/junk-patch-dist repository on GitHub.


Note that the 1.1 merge commit was made using all one-off patches together:

git merge --no-commit -s ours p1 p2

Care should be taken that all one-off patches are actually merged together.
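This can be mimicked in a throwaway repository. The branch and file names below are made up for illustration, but the `--no-commit -s ours` octopus merge is the same one used for the 1.1 merge commit: it records both one-off patches as ancestors while keeping the release's own tree.

```shell
# Toy illustration (made-up branch/file names) of recording a release as a
# merge of all one-off patches, while keeping the release's content (-s ours).
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "1.0" > logic.txt
git add . && git commit -qm "1.0"
main=$(git symbolic-ref --short HEAD)
# two one-off patches branched from 1.0:
git checkout -qb p1 && echo "1.0-p1" > logic.txt && git commit -qam "p1"
git checkout -q "$main"
git checkout -qb p2 && echo "1.0-p2" > logic.txt && git commit -qam "p2"
# the 1.1 content lands on the main line:
git checkout -q "$main"
echo "1.1" > logic.txt && git commit -qam "1.1 content"
# record that 1.1 supersedes both one-off patches, keeping 1.1's tree:
git merge --no-commit -s ours p1 p2
git commit -qm "1.1"
git log --oneline --graph
```

Because p1 and p2 are now ancestors of 1.1, a later `git rebase` onto 1.1 will skip the patch commits instead of replaying them.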

The Normal Installation


Now I mimic what might have happened during the original installation:

  1. Unzip
  2. Modify conf/patch/logic/
  3. Overwrite logic.jar with the one from p1 (support patch)


Effectively I'm now at 1.0-p1 with my own configuration, which I can verify with:

$ java -jar main.jar Mucker


Release 1.1


Now I would like to do an in-place upgrade to 1.1. Within the "dist-1.0-SNAPSHOT" directory I should do the following:

$ git init
$ git add .
$ git commit

This puts my entire installation in the master branch.


Now sync it with 1.0-p1

$ git remote add dist https://github.com/wolfc/junk-patch-dist.git
$ git fetch dist
$ git rebase dist/p1

Observe the conflict via:

$ git diff

This should show the change you made in the previous chapter.


Now the tricky bit is to resolve the conflict using your favourite conflict resolving tools (e.g. vim).


After that continue:

$ git add conf/patch/logic/
$ git rebase --continue


You should now be on the track from version 1.0 through one-off patch #1 into your own setup.


The final step is rebasing it onto version 1.1:

$ git rebase 1.1


Just to show what will happen (and to make it an interesting exercise) version 1.1 contains an update of the configuration file as well. So resolve that conflict and you are on version 1.1.




This is just a small program, but I reckon it should also be doable with an EAP (/ WildFly / AS) installation.


Reminisce Devoxx 2012

Posted by wolfc Dec 5, 2012

It has been almost one month since Devoxx 2012. I think that is a good time to do an inventory of what I recall of Devoxx. It would also serve as a pointer for other people that I may have forgotten something important.


I saw Dan and a lot of others working hard on Hackergarten Overdrive! They made some very good progress. Ivan made the astute observation that it was mostly mentors hanging around and not so much developers. Maybe it was because of the open agenda? Because the last presentation of Devoxx given by Adam Bien was essentially a hacker marathon in itself and very well attended. Although there we missed out on the real hacker interaction. What if we set forth with an agenda and thus attract specific interested parties?


As for myself, we got a small hackfest going with a customer to upgrade from AS 7.1.1 to EAP 6.0.0. For the application itself the upgrade is a breeze: just drop the application onto EAP 6.0.0 and you are done. But in this instance we also looked at configuration changes.


The first plan was to migrate these changes into the new config file, but that would have taken us a couple of hours. So we went for plan B:

  1. install EAP 6.0.0
  2. copy config file & application over from AS 7.1.1 to EAP 6.0.0
  3. boot it up
  4. have beer


Lo and behold, it booted up perfectly on the first go. So the upgrade from AS 7.1.1 to EAP 6.0.0 was done in 10 minutes and the last 20% of this plan took more than 80% of our time (of course).

Goodbye JBoss AS

The question of why keeps coming up, so I'll reiterate it one more time: a lot of people say "we are running on JBoss" or "we support JBoss", while they actually mean "we are running on JBoss AS" and "we support JBoss AS". JBoss is the brand name, not a piece of software. Hence we are renaming the piece of software, as "AS" by itself did not pass legal checks.


What will the new name be? Personally I like Petasos as a reference to both a (red) hat and the god of middleware.


Anyway the voting is closed. I do not know any of the results, so don't ask me. Just like you all I have to wait for the announcement to come.

JSR and Expert Groups

I liked the discussion that happened in the EG BOF about how we can further enhance the process to allow community visibility and involvement.


One important issue that was raised is the lack of a source control system for both the spec itself and the API. Right now the burden of keeping the spec and JSR website up to date is fully on the spec lead. With a good SCM this burden could easily be shared. Picture filing pull requests to the spec. Hopefully each EG will set up an SCM to tackle this soon.


It also raised the issue that API classes are currently coded with vendor specific implementations. By having the SCM open for everybody we could actually code out vendor neutral API classes. Personally I don't care about the licensing issues that would pop up, that's a byproduct of the world we live in.

Java 8, Lambda & parallelization

And of course there was a lot on Java 8.


I think we'll need more for parallelization than what was presented, especially to factor it into existing applications.

But then again I missed some sessions, so maybe it was covered in there.

As AS 7 has proven, parallelizing stuff properly can make a huge difference performance-wise.

OSGi and modularity

I would call this: the upcoming maintenance hell. Having multiple versions of components to maintain without any guard of restricting such version usage can explode the maintenance burden of a system.


Running multiple versions should only be considered as an escape hatch or if there is a real feature requirement. An example feature is running JBoss Data Grid on top of Enterprise Application Platform. JDG uses a feature-enhanced Infinispan (with a different configuration) as opposed to the one used in EAP clustering. So by isolating it, JDG has no impact on the cluster functionality of EAP.

Actual XSD validation is a bit harder than regular XML file validation. Contrary to popular belief,

SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
schemaFactory.setErrorHandler(new ErrorHandlerImpl());
schemaFactory.setResourceResolver(new XMLResourceResolver());

does not validate a schema. It'll quietly create a schema object even if the schema itself is invalid.


To really validate the schema, you must use a Validator. So

DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder parser = factory.newDocumentBuilder();
Document document = parser.parse(xsd);

SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
schemaFactory.setErrorHandler(new ErrorHandlerImpl());
schemaFactory.setResourceResolver(new XMLResourceResolver());
//Schema schema = schemaFactory.newSchema(new URL(""));
// make sure we do not use any internet resources
Schema schema = schemaFactory.newSchema(resource("schema/XMLSchema.xsd"));
Validator validator = schema.newValidator();
validator.validate(new DOMSource(document));

does the trick.


Actually it is exactly the same as XML file validation.
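To illustrate that, here is a small self-contained sketch that validates an instance document with the very same Validator pattern. The inlined schema and documents are toy examples of my own, not from the original post:

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

public class InstanceValidation {
    // returns true if the XML document is valid against the given schema;
    // the default error handling of Validator throws on validation errors
    public static boolean isValid(String xsd, String xml) {
        try {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        String xsd = "<?xml version=\"1.0\"?>"
            + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\">"
            + "<xs:element name=\"greeting\" type=\"xs:string\"/>"
            + "</xs:schema>";
        System.out.println(isValid(xsd, "<greeting>hello</greeting>")); // true
        System.out.println(isValid(xsd, "<other/>"));                   // false
    }
}
```

The only difference from the schema-validation snippet above is which document ends up in the Validator: the XSD itself (validated against the XMLSchema.xsd meta-schema) versus an instance document (validated against your XSD).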

We had an interesting discussion at FOSDEM about collaborating on a Free and Open Source Software project like JBoss AS 7. To really allow FOSS developers to participate you need to make sure only FOSS build tools and dependencies are used. Anything else and you would raise the bar for your contributors.


An interesting point was the usage of a non-FOSS issue tracker. Should you use an issue tracker which does have free access, but is not FOSS?

Personally I would say: why not? As long as you control and own the data it should pose no real threat. Just make sure the acquired license allows for an unlimited number of free accounts.


Using Fedora 17 makes sure your build tools, dependencies and even transitive dependencies are FOSS. But does it lower the bar for contributors?

In actuality no, because the build tool used (mvn-rpmbuild) and the component set available (latest and greatest only) make for a different result than upstream. But this does not only go for Fedora (and its mvn-rpmbuild), it goes for every Maven (or other build tool) project that draws in dependencies.

Looking at building Maven 3.0.3 using Maven 3.0.3 it would download and use maven-artifact 2.0 to 2.0.9, while at runtime it only uses maven-artifact 3.0.3 by its own definition.


Java developers give little thought about the runtime platform where their component is going to run. Regardless of the use of Maven, Gradle or Ant / Ivy. Or running on RHEL, Fedora, Debian or Windows. It will create issues once you want to integrate on a well-defined component set.


So we should really give more thought into where our components are going to run, be it Windows, OS X, Debian, RHEL or Fedora.

And we should try to foster an environment in which all these realities can collaborate.

Thanks to the effort of Marek Goldmann we will have JBoss AS 7 on Fedora.


On Fedora there are two requirements which make this an interesting challenge:

  1. Everything must be Free and Open Source Software
  2. Only the latest and greatest of each component is available


Because of these requirements, Hibernate Core 4, Resteasy and even JBoss VFS are not readily available on Fedora.


Initially we'll have a limited subset of features available, but we really want to get the blazingly fast, modular, exceptionally lightweight, fully compliant, easily testable JBoss Application Server 7 available to all Fedora users.


I'll talk about JBoss AS 7 and this ongoing rollercoaster ride at FOSDEM 2012.

Don't forget to stop by, or better: get on this ride!


The customer is king

Posted by wolfc Dec 16, 2011

With EJBs we have made the bean developer all powerful. But who is the actual consumer of these beans? Not the bean developer. It is the client calling the bean. I think we went the wrong way.


So I made a proposal to reverse this trend for @Asynchronous, see Asynchronous *Client* Invocation Semantics. On which Arjan Tijms came up with an interesting addition:

This interface can then be specified at the injection point:


@EJB @Asynchronous(ClientInterface.class)
ClientInterface bean; // interface methods are asynchronous and proxy to actual proxy


Then I went on and filed for a change to allow setting the transaction timeout of a request.


David Blevins not only envisioned it in the same style as we have it currently on JBoss AS, but also brought back the point of having the client as a central focus:

Using your example, imagine how cool this would be:


@Stateless [edit: any managed bean] class FooBean {
  @EJB OtherBean other;

  public void doSomething() {
    // presumably invokes the other bean with a 30 second transaction
    // timeout set at this call site (elided in the original snippet)
    other.doSomethingElse();
  }
}

@Stateless class OtherBean {
  public void doSomethingElse() {
     assert currentTransactionTimeout() == 30;
  }
}

Musing on these on a Friday afternoon, it might actually be possible to realize both without the need for JSR-308.


@Stateless class OtherBean {
   public void doSomethingElse() {
      assert currentTransactionTimeout() == 30;
   }
}

interface ClientInterface {
   Future<Void> doSomethingElse();
}

Future<Void> result = ClientInvocationContext.invoke(bean).with(ClientInterface.class).doSomethingElse();


Albeit not as pretty as it would be with type annotations, it should make the customer rule again.

More and more we want to do asynchronous invocations. This allows us to get better performance because of concurrency; just take a look at the speed of AS 7.


But what happens when an asynchronous invocation fails? Most times the exception itself will tell us why it failed. But the real trouble might be in the caller code, not in the asynchronous task itself.

java.lang.Exception: throw up
    at org.jboss.beach.util.concurrent.SimpleExceptionTestCase$
    at org.jboss.beach.util.concurrent.SimpleExceptionTestCase$
    at java.util.concurrent.FutureTask$Sync.innerRun(
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$

The stack trace is in this case useless. Elvis has left the building, address unknown.


Then I spotted a gem in Remoting 3 by David M. Lloyd: gluing stack traces. By gluing the stack trace at the right moment we know where the call originated from. So I wrote up a MarkedExecutorService which can wrap any other ExecutorService to provide this functionality.

java.lang.Exception: throw up
    at org.jboss.beach.util.concurrent.SimpleExceptionTestCase$
    at org.jboss.beach.util.concurrent.SimpleExceptionTestCase$
    at java.util.concurrent.FutureTask$Sync.innerRun(
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$
    at ...asynchronous invocation...(Unknown Source)
    at org.jboss.beach.util.concurrent.SimpleExceptionTestCase.testMarkedException(


Now we can easily see what instigated the throwing up. And thus cleaning up the mess can be done professionally.
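A minimal sketch of the gluing idea could look like this. It is my own reconstruction, not the actual MarkedExecutorService source: capture the caller's stack at submit time and splice it onto any exception the task throws, separated by a marker frame.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// Wrap an ExecutorService, remember the caller's stack trace at submit
// time and glue it onto any exception the task throws.
public class MarkedExecutor {
    private final ExecutorService delegate;

    public MarkedExecutor(ExecutorService delegate) {
        this.delegate = delegate;
    }

    public <T> Future<T> submit(final Callable<T> task) {
        // where does this asynchronous invocation come from?
        final StackTraceElement[] caller = Thread.currentThread().getStackTrace();
        return delegate.submit(new Callable<T>() {
            public T call() throws Exception {
                try {
                    return task.call();
                } catch (Exception e) {
                    StackTraceElement[] own = e.getStackTrace();
                    StackTraceElement[] glued =
                        new StackTraceElement[own.length + 1 + caller.length];
                    System.arraycopy(own, 0, glued, 0, own.length);
                    // the marker prints as "...asynchronous invocation...(Unknown Source)"
                    glued[own.length] =
                        new StackTraceElement("...asynchronous invocation...", "", null, -1);
                    System.arraycopy(caller, 0, glued, own.length + 1, caller.length);
                    e.setStackTrace(glued);
                    throw e;
                }
            }
        });
    }
}
```

Any exception retrieved via Future.get then carries both the task's own frames and the frames of the code that submitted it.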


See the MarkedExecutorService source code for the details.

In EJB 3.1 asynchronous client invocation semantics were introduced [1]. As the term already implies, it addresses a client's need; however, the specification is constructed such that it has become a bean developer concern. I think this is wrong.


Let me give a simple example. Suppose we would have developed some EJB. Now I want to build an application which wants to do an asynchronous request. In EJB 3.1 I would have to change the EJB to accommodate such a feature. Rather, I would just want to call it asynchronously.


So I started to fiddle some time ago to see if I can get a simple API with which asynchronous invocation becomes a client concern. Leaving the acquiring of an EJB out of scope, I came up with the following code [2]:

import static java.util.concurrent.TimeUnit.SECONDS;
import static org.jboss.beach.async.Async.async;
import static org.jboss.beach.async.Async.divine;

public class SimpleUnitTest {
   @Test
   public void test1() throws Exception {
      SomeView bean = new SomeBean();
      Future<Integer> future = divine(async(bean).getNumber());
      int result = future.get(5, SECONDS);
      assertEquals(1, result);
   }

   @Test
   public void test2() throws Exception {
      CyclicBarrier barrier = new CyclicBarrier(3);

      SomeView bean = new SomeBean();

      // (the asynchronous invocations were elided in the original post)

      barrier.await(5, SECONDS);
   }
}

So effectively I've got it boiled down to 3 methods:

public class Async {
    public static <T> T async(T bean);

    public static Future<?> divine();

    public static <R> Future<R> divine(R dummyResult);
}

With the async method a regular proxy is transformed into an asynchronous proxy. Every method on it will be available for asynchronous client invocation.

The divine methods return the Future from the latest asynchronous invocation.
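A sketch of how such an Async utility could be put together: the proxy submits every call to an executor and parks the Future in a ThreadLocal, which divine() then picks up. The method names mirror the proposal above, but the implementation details here are my own assumption, not the actual org.jboss.beach.async code (for one, it returns null from the proxy, so it only works for methods with reference return types):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;

public class Async {
    private static final ExecutorService executor =
        Executors.newCachedThreadPool(new ThreadFactory() {
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the JVM alive for this sketch
                return t;
            }
        });
    private static final ThreadLocal<Future<?>> lastFuture = new ThreadLocal<Future<?>>();

    @SuppressWarnings("unchecked")
    public static <T> T async(final T bean) {
        // transform a regular reference into an asynchronous proxy
        return (T) Proxy.newProxyInstance(
            bean.getClass().getClassLoader(),
            bean.getClass().getInterfaces(),
            new InvocationHandler() {
                public Object invoke(Object proxy, final Method method, final Object[] args) {
                    lastFuture.set(executor.submit(new Callable<Object>() {
                        public Object call() throws Exception {
                            return method.invoke(bean, args);
                        }
                    }));
                    return null; // the real result comes via divine()
                }
            });
    }

    public static Future<?> divine() {
        return lastFuture.get();
    }

    @SuppressWarnings("unchecked")
    public static <R> Future<R> divine(R dummyResult) {
        return (Future<R>) lastFuture.get();
    }
}
```

The dummyResult trick exists purely so the compiler infers the right Future type from the invocation expression; the argument value itself is ignored.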


An assumption that also needs to be mentioned, but is out of scope:

The caller environment has administrative operations to setup and control the asynchronous invocations. This is important so that asynchronous invocations within an application server can't overload it.


I'll put this proposal to the EJB 3.2 EG today to gauge opinions. I'm equally interested in your reactions as well, so comment (or flame ;-) ) away.


[1] EJB 3.1 FR 4.5 Asynchronous Methods


With the advent of AS 7 we have a perfect runtime implementation for a modular environment in the form of jboss-modules.


Now I also want it to be usable from within an IDE (in this case Intellij) and Maven (our current build tool). So I figured that through the use of a JUnit Runner I could set up a modular environment (similar to Arquillian).


First you'll have to pick up a piece of experimental code I call jboss-modules-junit.


Now you can start to run your test case within a modular environment simply by saying @RunWith(Modules.class).

@RunWith(Modules.class)
public class MyTestCase {
   ...
}


By default Modules will take the test class name as the module name. This can be overridden with @ModuleName.


First you'll need a module.xml file to represent your test module. By default the modules runner will look it up on your class path entry of your test class.

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.0" name="org.jboss.modules.junit.SimpleTestCase">
        <resources>
                <!-- a trick to add the original class path entry -->
                <resource-root path="../../../../../.."/>
        </resources>

        <dependencies>
                <module name="junit"/>
                <module name="org.jboss.logmanager"/>
        </dependencies>
</module>

Because you share JUnit with whoever is running the test (IDE or build tool) you need to define a junit module as such. Currently this is a manual requirement.

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.0" name="junit">
        <dependencies>
                <module name="system" export="false">
                        <imports>
                                <include-set>
                                        <path name="junit/framework"/>
                                        <path name="org/junit"/>
                                </include-set>
                        </imports>
                </module>
        </dependencies>
</module>


So I bring it all together on our main project, AS 7. You can try it for yourself; additional VM parameters are required though:

  • -Djboss.home.dir=/home/carlo/work/jboss-as/build/target/jboss-7.0.0.Beta4-SNAPSHOT (mandatory)
  • -Dmodule.path=target/test-classes/modules:/home/carlo/work/jboss-as/build/target/jboss-7.0.0.Beta4-SNAPSHOT/modules (override module repos)
  • -Djava.util.logging.manager=org.jboss.logmanager.LogManager (because of Intellij bug, see below)
  • -Djboss.embedded.root=target/temp-config (to instruct AS Embedded to make a copy)


Some caveats though:

  1. Intellij contains a bug which initializes a JUnit test case class too soon (fixed in 108.13)
  2. Maven and Intellij disagree in which directory to start the test, so you have to explicitly specify $MODULE_DIR$ as the Working directory in the Run Configuration.

Last Saturday the "Fools' Day" release went out the door.


Among a long list of changes, I would like to pick out a couple of issues which I personally find a huge improvement. Before you click off into the list, the short story is that a deployment plan which does not contain all bits will be rolled back. This means that an Arquillian deployment will no longer stall when it has 'missing' or broken bits (JBAS-9077). It has also become the default mode of operation (JBAS-9146). And with a bit of magic it works (JBAS-9082).


To me the rollback feature is a huge boon in usability. If I make a mistake (and I make many) the application server will provide me guidance. So this "Fools' Day" release has made this fool very happy.


Grab your copy here and tell me what you think.

In this blog I'll outline four strategies that allow separation of concern with your Java code. I'll assume that Maven is used as the build tool.


For more information on SoC, please read


The four strategies are done through:

  1. Spaghetti
  2. Packages
  3. Modules
  4. Components

Randomly place classes in packages

While not an actual strategy, I mention it because of strategy number two. If you ask developers whether they allow this kind of strategy they will vehemently deny using it.



Pros:

  • blatantly easy
  • randomly test bits and pieces (if at all :-) )

Cons:

  • I hope you like spaghetti (bolognese)

Have one concern per package

This strategy is mostly subscribed to. Every developer wants to build clean packages that capture one concern per package. Be it a feature package or an integration package.


The thing is that we don't have real tools that guard such architectural concerns. We could use dependency-analysis tooling, but usually it's kept to the developer.


Ultimately this leads to this strategy actually ending up the same as strategy number one. (Trust me, the deadline is always looming.)



Pros:

  • relatively easy
  • test concerns on a package level
  • almost doesn't need an IDE (vi can be enough)

Cons:

  • needs vigilant developers

Have one concern per module

Let's define a module as a Maven module. We have a single project containing multiple modules. The single project will provide a component that fulfills multiple functions and integrations.


With this strategy we actually have Maven guarding the boundaries of each module (as long as we don't fiddle freely with the dependencies).



Pros:

  • automatic guarding of separation, clean packages
  • allows an easy big bang integration test
  • a single codebase for multiple concerns

Cons:

  • needs proper integration (is that really a con :-) )
  • no more Q&D shortcuts before the deadline

Have one concern per component


Pros:

  • automatic guarding of separation
  • isolated development, testing and documenting
  • re-usable in different scenarios

Cons:

  • release management
  • bottom-up integration testing is needed
  • multiple codebases where multiple concerns are spanned


Now most of these cons stem from the fact that there are no tools available to help us out. We don't have a toolbox that says: component development integration tools. Thus releasing many components becomes a chore. Testing each commit on a component doesn't reach up the component integration tree, so again we need to intervene manually. And lastly we lose the overview when many components are involved, and we don't like it when we see something that does not fit our brain.

As an experiment I've created a Mixin utility class which allows for multiple inheritance. Well, actually it does what the class name says: create a mixin based on multiple interfaces.


I've no real use case for it, it's a bit of brain rot that I just wanted committed somewhere.


So here is a piece of code that makes use of the Mixin:

public class MixinTestCase {
   public static interface A {
      String getA();
   }

   public static interface B {
      String getB();
   }

   public void test1() {
      A a = new A() {
         public String getA() {
            return "A";
         }
      };
      B b = new B() {
         public String getB() {
            return "B";
         }
      };
      ClassLoader loader = Thread.currentThread().getContextClassLoader();
      Object mixin = Mixin.newProxyInstance(loader, a, b);
      assertEquals("A", ((A) mixin).getA());
      assertEquals("B", ((B) mixin).getB());
   }
}
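For the curious, such a Mixin.newProxyInstance could plausibly be built on a JDK dynamic proxy: collect all interfaces of the given delegates and dispatch each method to the first delegate that implements its declaring interface. This is my own sketch under that assumption, not necessarily the committed code:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class Mixin {
    public static Object newProxyInstance(ClassLoader loader, final Object... delegates) {
        // the mixin implements the union of all delegate interfaces
        List<Class<?>> interfaces = new ArrayList<Class<?>>();
        for (Object d : delegates)
            for (Class<?> i : d.getClass().getInterfaces())
                if (!interfaces.contains(i))
                    interfaces.add(i);
        return Proxy.newProxyInstance(loader, interfaces.toArray(new Class<?>[0]),
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
                    // dispatch to the first delegate that can handle the method
                    for (Object d : delegates)
                        if (method.getDeclaringClass().isAssignableFrom(d.getClass()))
                            return method.invoke(d, args);
                    throw new UnsupportedOperationException(method.toString());
                }
            });
    }
}
```

Note that methods inherited from Object (toString, hashCode, equals) end up dispatched to the first delegate, since Object is assignable from everything; a production version would want to handle those explicitly.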


The full code can be found here:
