Hi Manik,
I was looking at the benchmark graphs that have been posted
http://2.bp.blogspot.com/_ca0W9t-Ryos/S4O322x5vEI/AAAAAAAAA_Q/C6V6jM_BxEM/s1600-h/infinispan_GET.png
and
http://1.bp.blogspot.com/_ca0W9t-Ryos/S4O36SCiOZI/AAAAAAAAA_Y/hw3TDXsTxrc/s1600-h/infinispan_PUT.png
At 4 nodes the graphs show the following:
Dist-sync
GET operations = 500,000 ops/sec (125,000 ops/sec/node)
PUT operations = 66,000 ops/sec (16,500 ops/sec/node)
Dist-sync-lazy
GET operations = 168,000 ops/sec (42,000 ops/sec/node)
PUT operations = 72,800 ops/sec (18,200 ops/sec/node)
Now I want to scale the cluster (increase the number of nodes) to support more transactions per second.
But before I can even think of scaling further, just to match the aggregate throughput I had with the 4-node cluster, I would need the following configurations:
Dist-sync
GET operations with 50 nodes = 500,000 ops/sec (10,000 ops/sec/node)
PUT operations with 7 nodes = 66,500 ops/sec (the graph varies, averaging out at 9,500 ops/sec/node)
Dist-sync-lazy
GET operations with 10 nodes = 170,000 ops/sec (17,000 ops/sec/node)
PUT operations with 8 nodes = 76,000 ops/sec (the graph varies, averaging out at 9,500 ops/sec/node)
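Just to show how I arrived at those node counts, here is a small sketch of the arithmetic (the function name and the per-node figures are my own, read off the graphs above): given the 4-node aggregate throughput and an estimated per-node rate at larger cluster sizes, it computes how many nodes are needed just to break even.

```python
import math

def nodes_to_match(aggregate_4node_ops, per_node_ops_at_scale):
    """Nodes required so that per-node rate * node count
    reaches the aggregate throughput seen at 4 nodes."""
    return math.ceil(aggregate_4node_ops / per_node_ops_at_scale)

# Dist-sync GETs: 500,000 ops/sec aggregate at 4 nodes; if the per-node
# rate drops to ~10,000 ops/sec at scale, ~50 nodes are needed to match it.
print(nodes_to_match(500_000, 10_000))  # 50

# Dist-sync PUTs: ~66,500 ops/sec aggregate at ~9,500 ops/sec/node.
print(nodes_to_match(66_500, 9_500))    # 7
```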
There is a huge gap for GET operations before I can exceed the performance available at 4 nodes.
Given this, it may not be correct to state that the cache scales in terms of transactions per second.
Is my understanding correct?
Thanks,
Kapil