-
1. Re: Replication Failover - When to activate backup
ronsen Jun 7, 2011 9:38 AM (in response to clebert.suconic)Given that JBoss actually already brings a lot of these features, should this be implemented (as it is with HornetQ currently) separately from features such as JGroups, Infinispan, etc.?
Btw.: I'd really appreciate the replication feature.
-
2. Re: Replication Failover - When to activate backup
clebert.suconic Jun 7, 2011 3:41 PM (in response to ronsen)I will definitely look at how this is done on AS / JGroups and Infinispan. However, a messaging system is different, as you can't afford a split brain in a message system.
-
3. Re: Replication Failover - When to activate backup
ronsen Jun 7, 2011 4:41 PM (in response to clebert.suconic)True, true. I just thought it could be easier and faster to implement with the aid of such "frameworks". I don't know, is it actually planned to use them for later HornetQ-JBoss implementations?
Just let me know if you want to test something.
-
4. Re: Replication Failover - When to activate backup
timfox Jun 8, 2011 5:17 AM (in response to ronsen)How to activate the backup was outlined in the original long post I made way back, when I first outlined the new HA / clustering.
Take a look at the stuff about quorums.
-
5. Re: Replication Failover - When to activate backup
ronsen Jun 8, 2011 5:25 AM (in response to timfox)I checked your article, but as far as I can see, it's not exactly what's meant here.
Replicating the message queue across the whole cluster gives totally different scenarios than just having a backup to which nodes can fail over. Correct me if I'm misunderstanding anything.
-
6. Re: Replication Failover - When to activate backup
timfox Jun 8, 2011 5:30 AM (in response to ronsen)http://community.jboss.org/thread/152610?start=0&tstart=0
Clebert was asking about when the backup should be activated; this is outlined in the article above.
"
When using shared nothing we also need to protect from "split-brain". This can be done by requiring a quorum of nodes in the cluster for it to continue to operate.
Each cluster-connection can be configured with a <quorum-size/> config element which determines the minimum number of nodes required for the cluster to continue to operate.
If the replicating connection from the live to the backup dies, the backup node will detect this. This could occur because of live node failure or because of some temporary network failure (e.g. a network partition).
To distinguish between the two, the backup node will detect the connection failure and then ping each member of the cluster it knows about. If it receives a pong from at least <quorum-size/> other nodes within a timeout, then it will assume the live node has indeed died and it can take over as live. Otherwise it will remain as a backup and assume there is a temporary network partition, and continue trying to reconnect to the live node. Once the partition has been fixed, the backup will reconnect to the live and perform the resync protocol once more, and then resume normal backup operations."
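The decision described in the quote could be sketched roughly like this. All names here (`Pinger`, `QuorumCheck`, `shouldActivate`) are invented for illustration; this is not HornetQ's actual API:

```java
import java.util.List;

// Hypothetical stand-in for whatever transport the backup uses to ping peers.
interface Pinger {
    boolean ping(String nodeId, long timeoutMs); // true if a pong arrives in time
}

class QuorumCheck {
    // Called when the replicating connection from the live node dies.
    // Returns true if the backup should take over as live.
    static boolean shouldActivate(List<String> knownNodes, Pinger pinger,
                                  int quorumSize, long timeoutMs) {
        int pongs = 0;
        for (String node : knownNodes) {
            if (pinger.ping(node, timeoutMs)) {
                pongs++;
            }
            if (pongs >= quorumSize) {
                // Enough of the cluster is reachable: assume the live node is dead.
                return true;
            }
        }
        // Could not reach a quorum: assume a network partition, stay as backup.
        return false;
    }
}
```

The point of the quorum is exactly the split-brain protection Clebert mentioned: a backup that is itself partitioned off cannot reach enough peers, so it never activates.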
-
7. Re: Replication Failover - When to activate backup
ronsen Jun 8, 2011 10:11 AM (in response to timfox)Thanks for clarifying
-
8. Re: Replication Failover - When to activate backup
timfox Jun 9, 2011 4:11 AM (in response to ronsen)Now for actually sending requests to a quorum of peers to see if they are available, yes you could probably use JGroups for that.
-
9. Re: Replication Failover - When to activate backup
ronsen Jun 9, 2011 7:56 AM (in response to timfox)It would have been nice if JGroups had been used in HornetQ instead of a custom implementation.
For a seamless integration of replication, it would have been nice to have an implementation that uses e.g. cached queues with invalidation (e.g. with Infinispan) to implement total replication with HornetQ, but I think that's a bit too far for now and probably not the intention.
-
10. Re: Replication Failover - When to activate backup
timfox Jun 9, 2011 1:50 PM (in response to ronsen)Total replication in a messaging system is absolutely not what you want to do. It simply doesn't scale, since every node has to handle every action that hits any other node.
That's why we didn't do it that way.
-
11. Re: Replication Failover - When to activate backup
timfox Jun 9, 2011 2:13 PM (in response to timfox)Also, if you use something like virtual synchrony to totally order a set of replicas, you effectively single-thread everything (state changes are applied on each node with a single thread). So not only does each node have to handle the sum of everything that happens on every other node in the cluster, it has to do it with a single thread, and consequently can only use a single core on each server.
So it really doesn't scale.
-
12. Re: Replication Failover - When to activate backup
ronsen Jun 9, 2011 3:44 PM (in response to timfox)It's well known that such strategies scale very badly. They scale, but the scalability is very limited. It's the same with the former JBoss Cache.
What about implementing it the same way distribution is implemented in Infinispan, with remote calls?
-
13. Re: Replication Failover - When to activate backup
timfox Jun 10, 2011 5:44 AM (in response to ronsen)Ron K wrote:
It's well known that such strategies scale very badly. They scale, but the scalability is very limited.
Actually, they don't scale (horizontally) at all. If you think about it, with total replication every node handles the traffic of every other node, so adding more nodes to the cluster doesn't give you the ability to handle any more traffic overall. To scale a fully replicated cluster you need to scale it *vertically* (i.e. add more RAM and CPU to each node).
What replication gives you is *availability* not scalability.
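A toy calculation makes the argument concrete (the numbers and names are made up purely for illustration):

```java
// Toy arithmetic for the scaling argument above; all figures are invented.
class ScalingSketch {
    // Full replication: every node must process every message sent anywhere
    // in the cluster, so aggregate throughput is capped at one node's capacity,
    // no matter how many nodes you add.
    static long replicatedCapacity(int nodes, long perNodeMsgsPerSec) {
        return perNodeMsgsPerSec;
    }

    // Shared-nothing partitioning: each node handles only its own share,
    // so aggregate throughput grows with the node count.
    static long partitionedCapacity(int nodes, long perNodeMsgsPerSec) {
        return (long) nodes * perNodeMsgsPerSec;
    }
}
```

With, say, 10 nodes each capable of 5,000 msgs/sec, the replicated cluster still tops out at 5,000 overall, while the partitioned one reaches 50,000.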
What about implementing it the same way distribution is implemented in Infinispan, with remote calls?
Infinispan, and other distributed caches, are designed primarily to store map-like data ((key, value) pairs). They're not so easy to use with ordered, queue-like data. It can be done (you can implement a list in a map), but it would be very slow to store and retrieve data.
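To see why, here is a minimal sketch of "a list in a map": a FIFO queue stored in a plain key-value map. This is a hypothetical illustration (a local `HashMap` standing in for a distributed cache, not Infinispan's actual API); the point is that every queue operation must touch a value key *and* maintain head/tail counters, and on a distributed cache each of those accesses would be a remote call:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical FIFO queue built on a key-value map. A real distributed
// cache would make each store access (and the counter updates) a remote
// operation, which is why this pattern is slow for queue-like data.
class MapBackedQueue<V> {
    private final Map<Long, V> store = new HashMap<>(); // stand-in for the cache
    private long head = 0; // index of the next element to dequeue
    private long tail = 0; // index of the next free slot

    void enqueue(V value) {
        store.put(tail, value); // one cache put, plus the tail counter update
        tail++;
    }

    V dequeue() {
        if (head == tail) {
            return null;                 // queue is empty
        }
        V value = store.remove(head);    // one cache remove
        head++;                          // plus the head counter update
        return value;
    }
}
```

In a distributed setting the head/tail counters also become contended shared state, so concurrent producers and consumers would serialize on them.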
-
14. Re: Replication Failover - When to activate backup
ronsen Jun 14, 2011 11:26 AM (in response to timfox)Tim Fox wrote:
Ron K wrote:
It's well known that such strategies scale very badly. They scale, but the scalability is very limited.
Actually, they don't scale (horizontally) at all. If you think about it, with total replication every node handles the traffic of every other node, so adding more nodes to the cluster doesn't give you the ability to handle any more traffic overall. To scale a fully replicated cluster you need to scale it *vertically* (i.e. add more RAM and CPU to each node).
What replication gives you is *availability* not scalability.
What about implementing it the same way distribution is implemented in Infinispan, with remote calls?
Infinispan, and other distributed caches, are designed primarily to store map-like data ((key, value) pairs). They're not so easy to use with ordered, queue-like data. It can be done (you can implement a list in a map), but it would be very slow to store and retrieve data.
I agree with you ;-) I just wanted to mention that the scalability holds up to a certain limited level, e.g. a hypothetical 10-12 nodes, but that's not real "scalability", you are right.
But sometimes the failure tolerance has to be higher than just having one backup node, and therefore such strategies could be the method of choice.
What about having the possibility of more than one backup node in a hierarchical order?