We've been considering a few different options for Infinispan architectures. A little background on where we are now...
We are running two cache instances in replicated mode, each embedded within our application process. We do this for failover: while one instance is active, the other is passive.
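For reference, the current embedded setup looks roughly like this. This is a minimal sketch only: the cache and cluster names are placeholders, and the exact schema elements depend on the Infinispan version in use.

```xml
<infinispan>
   <global>
      <!-- Both application JVMs join the same cluster -->
      <transport clusterName="app-cluster"/>
   </global>
   <!-- "appCache" is a placeholder name -->
   <namedCache name="appCache">
      <!-- Replicated mode: every write is copied to the other instance -->
      <clustering mode="replication">
         <sync replTimeout="15000"/>
      </clustering>
   </namedCache>
</infinispan>
```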
That being said, we have a couple of goals in mind...
- Remove the Coupling between the Application and the Cache Instance
- Remove the Coupling between the Cache Instances (e.g. merge / state transfer)
- Reduce the Latency for Replicated Operations
The idea is that we will add 2 more cache instances. These instances will be independent of our application processes. In addition, we will put each application / standalone cache pair on the same machine. This should allow us to reduce latency by replicating to a cache on the same machine.
The problem we have with this approach is that the two application processes are still coupled (cache 1 & cache 2) and as a result may affect each other (e.g. merge / state transfer).
The benefit is that the application processes can both fail, but the cache will remain intact via the separate Infinispan instances.
Hot Rod Multi Cluster Architecture
The only difference here is that we have replaced the single 4-node cluster with two 2-node clusters connected via Hot Rod.
We still have the same problem with the two application processes being coupled and we still have the benefit that the cache will remain intact in the event that both application processes fail.
To be honest, I'm not sure what the benefit of using Hot Rod is. We still have the same pros and cons, but now we have a much more complex architecture.
Hot Rod Architecture
This is our preferred approach in that the application processes are no longer coupled, and the cache will remain intact in the event that both application processes fail.
However, it appears that Hot Rod does not yet support the functionality we need. As it stands, the two caches will quickly become inconsistent, since Hot Rod does NOT 'push' operations to the remote caches.
It seems that the remote cache is designed to operate as an L1 cache with a very aggressive eviction policy. I assume that ISPN-374 (Async Hot Rod Events) will allow you to use the remote cache as a proper L1 cache without the need to set up an eviction policy. However, we will take a performance hit on failover since the other cache will be empty.
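To make the "L1 with aggressive eviction" point concrete, here is a rough sketch of what each embedded cache might look like today: a short lifespan plus a RemoteCacheStore that passes reads and writes through to the Hot Rod cluster. The element, class, and property names here are from memory and would need checking against the actual version.

```xml
<namedCache name="appCache">
   <!-- Aggressive expiration so stale local entries don't linger -->
   <expiration lifespan="5000"/>
   <loaders>
      <!-- Pass reads / writes through to the shared Hot Rod cluster -->
      <loader class="org.infinispan.loaders.remote.RemoteCacheStore">
         <properties>
            <property name="remoteCacheName" value="appCache"/>
         </properties>
      </loader>
   </loaders>
</namedCache>
```

With ISPN-374 in place, the expiration could presumably be dropped, with entries invalidated as events arrive from the server instead.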
What we would like to do is build on ISPN-374 so that, rather than sending only the key with the event, we send the value too. This will allow us to propagate values (for inserts / updates) rather than just keys (for removes).
Question: When will ISPN-374 be ready? What are your thoughts on us extending it so that we can propagate values and not just keys (for invalidation)? The idea is that once ISPN-374 is ready, I will code the extension myself.
Finally, is there anything I've missed here or are there any other suggestions?