We have roughly 200 client instances of a product. Right now we are using an in-memory cache.
We want to configure an Infinispan cluster for them. Each instance's data will be stored in its own distributed cache, so we will have 200 caches, each holding the entries of a particular instance.
On average, let's assume each instance's cache takes around 2 GB of memory (subject to the number of DB entries). We would like each cache to be distributed across, say, 20 nodes, so we are thinking of keeping owners="20", and the number of segments will be 32 * cluster size.
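For reference, a cache with these parameters might look roughly like this in the WildFly Infinispan subsystem (container and cache names are placeholders, and note that owners is the number of copies Infinispan keeps of each entry; segments="640" assumes your 32 * cluster-size formula with a 20-node cluster):

```xml
<cache-container name="clustered" default-cache="instance-1">
    <transport lock-timeout="60000"/>
    <!-- one such cache per client instance, 200 in total -->
    <distributed-cache name="instance-1" mode="SYNC" owners="20" segments="640"/>
</cache-container>
```

The exact attribute set varies between WildFly/Infinispan versions, so treat this as a sketch rather than a drop-in configuration.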
We also want to keep a single server group, and all servers would belong to that group. We would also like to set up eviction based on memory: we want to evict when 80% of memory is full.
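As far as I know, Infinispan expresses the eviction threshold as an absolute per-cache limit rather than a heap percentage, so the 80% figure would have to be precomputed. A sketch in the recent Infinispan schema (the 1600 MB value assumes 80% of a hypothetical 2 GB per-cache budget; the WildFly subsystem uses its own element names for the same setting):

```xml
<distributed-cache name="instance-1" owners="20">
    <!-- evict once the cache holds ~1600 MB of data
         (80% of an assumed 2 GB per-cache budget) -->
    <memory max-size="1600MB" when-full="REMOVE"/>
</distributed-cache>
```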
We want our Domain Controller (DC) to be highly available, and we chose the static discovery option. Let's say we will keep 5 DCs: one will be the primary and the other 4 will be backup DCs.
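With static discovery, each host controller would list the candidate DCs in its host.xml, along these lines (hostnames and the security realm are placeholders; the host controller tries the entries in order until one responds):

```xml
<domain-controller>
    <remote security-realm="ManagementRealm">
        <discovery-options>
            <static-discovery name="primary"  protocol="remote" host="dc1.example.com" port="9999"/>
            <static-discovery name="backup-1" protocol="remote" host="dc2.example.com" port="9999"/>
            <!-- one static-discovery entry per remaining backup DC -->
        </discovery-options>
    </remote>
</domain-controller>
```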
Let's assume we have no issues in keeping multiple server nodes on a particular HC machine.
Since each server process runs in a JVM separate from the Host Controller process, I conclude that the server processes are constrained only by the available (user) memory on the machine.
I think that if I set max-size inside the server-group element's jvm definition, I can cap the maximum heap, and I guess it applies per server node:
<heap size="64m" max-size="512m"/>
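In the standard domain.xml layout, that setting would sit roughly here (group, profile, and socket-binding names are placeholders; note that a 512m heap clearly could not hold a node's share of 200 x 2 GB caches, so the real value needs to be sized against owners and the eviction limits):

```xml
<server-groups>
    <server-group name="cache-group" profile="full-ha">
        <jvm name="default">
            <!-- applied to every server node started in this group -->
            <heap size="2g" max-size="4g"/>
        </jvm>
        <socket-binding-group ref="full-ha-sockets"/>
    </server-group>
</server-groups>
```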
Given the above details, what would be the ideal size of the cluster?
What is the ideal layout: one HC -> one server node, or one HC -> 2-5 server nodes? We would like to treat all our server nodes as equal in terms of memory.
What should be the number of server nodes on a particular HC machine?