Firstly, I'm still very new to the J2EE space, so please bear that in mind.
I was wondering how people solve what I believe to be a common clustering/caching issue: one gigantic cache (i.e. a J2EE server and entity beans) often means a lot of thrashing when different customers do different things at different times. The 2nd customer comes along, all of the 1st customer's data is in the cache, and so the cache has to expire/load a lot of information to deal with the 2nd customer.
We have an application along typical lines, CustomerBean->many->TaskBean, etc. If the number of customers is large, and the number of Tasks obviously far larger still, how do you design a system that scales well and doesn't thrash too much? I realise that adding more memory is the first thing to do to minimise the chance of one customer's data purging another's, but that doesn't seem like a real long-term view to me.
The other problem is that you really want to take advantage of in-VM calls if you can. Clustering is damn useful for availability etc., but doesn't it reduce the chance that the particular set of entity beans in question resides in the same VM as the request (i.e. it requires a remote call)?
What I thought might be possible is to "locate" customer-specific EJB data on specific servers and somehow "sticky" a user's session to a particular server (so all of Customer X's entity beans live on Server K, and you guarantee after login that the session is stuck to Server K).
There appear to be advantages to this (and disadvantages, I know - see below):
* You can use cheaper equipment to host a specific customer (i.e. only a single customer per box). No need for 1TB of server memory to host everything.
* Sticking to a particular server benefits from in-VM calls (that's where the data is!)
* High availability is sacrificed - if Server K goes down, Customer X can't log in or do anything (though at least the outage is isolated to a specific customer or customers).
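A rough sketch of the routing I have in mind (the class and method names are purely invented for illustration): hash the customer ID over the list of available nodes, so every session for a given customer lands on the same box.

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration only: pin each customer to one server by hashing
// the customer id over the list of available nodes.  A real load
// balancer would do this with session affinity / sticky sessions.
public class CustomerRouter {
    private final List<String> servers;

    public CustomerRouter(List<String> servers) {
        this.servers = new ArrayList<>(servers);
    }

    // The same customer id always maps to the same server, as long
    // as the server list doesn't change.  floorMod avoids the
    // negative-hashCode pitfall of Math.abs.
    public String serverFor(String customerId) {
        int idx = Math.floorMod(customerId.hashCode(), servers.size());
        return servers.get(idx);
    }
}
```

The obvious weakness (as with any mod-N scheme) is that adding or removing a node remaps most customers, which is one reason people reach for consistent hashing instead.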
Anyway, can anyone comment on the general nature of the problem and how it is solved? I figure you eventually run out of $/hardware capacity with a single server if you want in-VM calls. Perhaps there's another way I'm completely missing.
Would love anyone's thoughts, links to good articles regarding this sort of thing.
Just a very generic idea: you can try to replicate only to one or n dedicated backup nodes for a given node X. When X crashes, one of the backup nodes takes over and makes another node its backup. So, for example, for session replication, you don't replicate everything to everyone, just to one backup node. This increases scalability.
Another solution is to break your state into substates and replicate them to individual backup nodes - kind of like striping in a RAID. This is more difficult and requires some logical partitioning of your state.
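A minimal sketch of the "one buddy" idea above, assuming a fixed ring of nodes (everything here is invented for illustration, not a real JBoss API): each node replicates only to its neighbour, and when a node crashes, the neighbour takes over and the ring simply shrinks.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of "replicate to one backup node" instead of to everyone.
// Nodes form a ring; each node's single backup is its neighbour.
public class BuddyRing {
    private final List<String> nodes;

    public BuddyRing(List<String> nodes) {
        this.nodes = new ArrayList<>(nodes);
    }

    // The one node that holds a replica of this node's state.
    public String backupFor(String node) {
        int i = nodes.indexOf(node);
        return nodes.get((i + 1) % nodes.size());
    }

    // When a node crashes, its buddy inherits the state; the buddy's
    // new backup falls out of the shrunken ring automatically.
    public String failover(String crashed) {
        String buddy = backupFor(crashed);
        nodes.remove(crashed);
        return buddy;
    }
}
```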
Thanks for the reply Bela,
Not a bad idea, thanks I'll think more about this approach.
As I've thought more about the problem, I realise that having a sticky session to a box probably solves a lot of the locality problems.
Let's say you have 4 nodes in the cluster. If I can guarantee that each node can store 1/4 of all customer data, shouldn't sticking a client to a particular box result in that box using LRU to evict older data to make room for the new client, slowly building up a cache for that client's session?
The worst case is that several clients from the same customer get stuck to different boxes, each polluting its cache with more of that customer's data.
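For what it's worth, the per-node LRU behaviour I'm describing is easy to model with a `LinkedHashMap` in access order (the cache size here is just for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy per-node entity cache: when a new client's data arrives and the
// cache is full, the least-recently-used entries are evicted first.
public class NodeCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public NodeCache(int maxEntries) {
        super(16, 0.75f, true);   // true = access order, i.e. LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

With sticky sessions, each node's cache naturally fills with the data of the customers stuck to it; without them, the same customer's data ends up duplicated across several nodes' caches.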
Is there some hook in the JBoss code that would allow a notification-style approach to pre-load a set of customer data into the entity cache? So this bean/object gets notified when a client logs in and gets stuck to Server X, and Server X intelligently ensures that the appropriate customer data is cached and made available?
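In application terms, what I'm imagining is something like the following - note that the listener and loader interfaces here are purely hypothetical, not real JBoss APIs; I don't know whether JBoss exposes such a hook:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: when a client logs in and gets stuck to this
// server, warm the local cache with that customer's data up front,
// before requests start hitting the entity layer.
interface CustomerDataLoader {
    Map<String, Object> loadAllFor(String customerId);
}

public class LoginWarmupListener {
    private final CustomerDataLoader loader;
    private final Map<String, Object> localCache = new ConcurrentHashMap<>();

    public LoginWarmupListener(CustomerDataLoader loader) {
        this.loader = loader;
    }

    // Invented callback: fires when a session is stuck to this node.
    public void onLogin(String customerId) {
        localCache.putAll(loader.loadAllFor(customerId));
    }

    public boolean isCached(String key) {
        return localCache.containsKey(key);
    }
}
```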
I'm not so concerned with complete failover, i.e. the customer transparently moving over to another box on server failure. We would be happy to require them to log in again and get stuck to another box.
Does anyone have some good links for Linux sticky-session-style technology (hardware/software)?
Thanks again for your post.
> As I've thought more about the problem, I realise
> that having a Sticky session to a box probably solves
> a lot of the locality problems.
Yes - this approach requires sticky sessions. It would work without them, but then we would potentially have to redirect requests for data X from a node that doesn't have X to another node that does. This would work, but we would 'warm up' more nodes with data we don't actually want there.
> Let's say you have 4 nodes in the cluster. If I can
> guarantee that each node in the cluster can store 1/4
> of all customer data, shouldn't sticky-ing a client
> to a particular box result in that box using LRU to
> remove older data to handle this new client, and
> slowly build up a cache for that client session.
> The worst-case part of this is that several clients
> from the same customer get sticky-ed to different
> boxes, polluting the cache with more of their data
Yes. The load balancer should try its best to avoid this. I understand that if you have multiple frames, each request might initially get sent to a different node.
> Is there some hook in the JBoss code that would allow
> a notification style approach to pre-load a set of
> customer data into the entity cache?
There's not one but multiple entry points, and I'm not sure we make them public. For one, there is session replication. Then we have stateful session bean replication, and also the entity bean cache (with cache invalidation).
My comments were geared towards HTTP session replication.