If you use the same name for the application on both nodes you might confuse the server.
The EJB naming is based on <app name>/<module name>... if the app is found locally, the server ignores the remote one.
If you are looking for examples, you might have a look at this quickstart.
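For reference (going from memory of the WildFly docs here, so double-check for your version), the <app name>/<module name> naming he mentions ends up in the full ejb: lookup string, with the distinct-name segment usually left empty:

```
ejb:<app-name>/<module-name>/<distinct-name>/<bean-name>!<fully-qualified-remote-interface>
```

So two deployments that resolve to the same <app-name>/<module-name> pair collide in that namespace, which is why a local match shadows the remote one.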
Aha, it was hidden away in a branch; that's why I didn't see that quickstart before.
The app-name hack was meant to trick JBoss into thinking the ears are the same application, so that I can deploy only the war on the small nodes (the master node has only the full ear, which also includes the war, not the slim one) and let JBoss wire it up to the ejb-jar on the master node. Remember, this is still a single application; I'm just looking to scale that one war horizontally. There are more reasons than I can count on both hands why the rest of the app is unsuitable for clustering. A descriptive term I just made up would be heterogeneous clustered deployment: different deployments on different nodes that still belong to, and behave as, a single logical application.
But what I'm getting from this is that I have to treat this as two applications and do everything manually, from setting up a remote-ejb-connection, using regular remoting with InitialContext or @EJB(lookup=...), to manual cache handling (possibly with an Infinispan backend); is that right?
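To make the manual wiring concrete, here is a sketch of the client-side half, assuming the small node's remoting subsystem already defines a remote-outbound-connection named remote-ejb-connection pointing at the master node (the connection name and deployment names are made up for illustration). A jboss-ejb-client.xml packaged in the slim war would look roughly like:

```xml
<!-- WEB-INF/jboss-ejb-client.xml in the slim war on a small node -->
<jboss-ejb-client xmlns="urn:jboss:ejb-client:1.2">
    <client-context>
        <ejb-receivers>
            <!-- refers to a remote-outbound-connection defined in the
                 remoting subsystem of the small node's configuration -->
            <remoting-ejb-receiver outbound-connection-ref="remote-ejb-connection"/>
        </ejb-receivers>
    </client-context>
</jboss-ejb-client>
```

The beans on the master would then be reached with @EJB(lookup = "ejb:...") or an equivalent InitialContext lookup using the ejb: name of the bean in the full ear. The exact schema version and namespaces depend on your JBoss/WildFly release, so check the docs for the one you run.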
What I did not understand is the reason to split the application. Remote invocation will decrease performance; to cluster the application I would prefer to deploy the same application on several nodes and let the persistence layer do the synchronization. You might use optimistic locking or pessimistic locking (row locks in the DB).
The best for performance is optimistic locking, and most applications can use that approach, as it is very seldom that the same data is accessed concurrently.
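To spell out the optimistic-locking suggestion: in JPA you add a @Version field to the entity and the provider rejects stale updates automatically. The underlying mechanism is simple enough to sketch in plain Java (class and field names invented for illustration): an update only succeeds if the version observed at read time is still current; otherwise the caller gets the equivalent of an OptimisticLockException and must re-read and retry.

```java
// Plain-Java sketch of the version check JPA performs for a @Version field.
// A real JPA entity would just declare: @Version private long version;
class VersionedRow {
    private String data;
    private long version = 0;

    synchronized long currentVersion() { return version; }

    synchronized String read() { return data; }

    // Succeeds only if nobody else committed since expectedVersion was read;
    // JPA would throw OptimisticLockException instead of returning false.
    synchronized boolean update(long expectedVersion, String newData) {
        if (version != expectedVersion) {
            return false; // stale read: caller must re-read and retry
        }
        data = newData;
        version++;
        return true;
    }
}
```

Since concurrent writes to the same row are rare in most applications, the retry path almost never runs, which is why this usually beats holding pessimistic row locks for the duration of a transaction.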
I'll try to sum it up as best I can without spoiling details. My app is a monolith that has grown out of control. There are several continuous processes responsible for monitoring, communicating with, and controlling files, other software, and other servers. These are designed to run in one place, and they are not very cpu-intensive. You might think the right approach is to extract these components and put them on their own server, but the dependencies between them and the rest of the app are so convoluted and intertwined that it's hard to see where to even start cutting.
So I figured I'd start cutting in the other end, where the pressure is. We have a tiny external api for pulling data from the rest of the system and generating a url based on several factors in the request, and this api is the only part that really gets any load. Most of the processing (including some external communication) can be performed based on cached data without ever hitting the database, and the parts that do rely on the db are operating on rarely changing data. This is what the new war I'm talking about is. It's the only performance-hungry component, and the only component that really needs to scale.
Your last reply made me reevaluate the first approach, though. If what I gain from horizontal scaling is lost to remoting overhead, I'll just have to bite the bullet, go back to the drawing board, and try to do the opposite: extract the non-distributable parts.
What do you think? (Sorry, this is starting to get a little off topic from the category this is posted in)