Hi there - a lot of in-house testing is currently ongoing. Here are some results, pushed up to 48 nodes - http://infinispan.blogspot.com/search/label/benchmarks - but note that this is on Infinispan 4.0; 4.1 has become significantly faster still. Due to resourcing constraints, we've only recently been able to pick up the benchmarking effort again. Expect to see more results posted.
In terms of usage, I know of a few large-scale apps designing their systems around 400 - 600 node Infinispan clusters, if that helps. We will be encouraging them to write white papers on the subject as their systems go live.
Does Infinispan have some way to manage very large JVM heaps for large-scale deployments? This is an area where users of JBoss Cache or Infinispan can get stuck.
Like Terracotta does, for example: http://www.theserverside.com/news/thread.tss?thread_id=60893
Distributed mode is one answer to this. Still, are there any GC recommendations?
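For context, distributed mode spreads each entry across a fixed number of owner nodes rather than replicating it everywhere, so no single heap has to hold the full data set. A minimal sketch of a 4.x-style XML configuration; the cluster name, owner count, and L1 lifespan are placeholder values to tune for your deployment:

    <infinispan>
      <global>
        <!-- JGroups-backed transport; clusterName is a placeholder -->
        <transport clusterName="demoCluster"/>
      </global>
      <default>
        <!-- each entry is stored on numOwners nodes, not on every node -->
        <clustering mode="distribution">
          <hash numOwners="2"/>
          <!-- L1 keeps short-lived local copies of remotely owned entries -->
          <l1 enabled="true" lifespan="60000"/>
        </clustering>
      </default>
    </infinispan>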
I'm actually working on swapping out 2x32 GB memcached servers for Infinispan and planning to use CMS (concurrent mark-sweep). It works like a charm for my Java Tomcat servers running 12 GB heaps; I have only seen the sporadic full GC, and when I do, it takes about 4-8 seconds, not the 1-2 minutes described in the article. Keep in mind, I'm not running a financial trading system or a mission-critical application.
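For what it's worth, this is roughly the kind of HotSpot flag set I mean; the heap size and occupancy threshold are assumptions you would need to tune for your own workload:

    # CMS settings for a large, long-lived cache heap (sizes are placeholders)
    JAVA_OPTS="-Xms12g -Xmx12g \
      -XX:+UseConcMarkSweepGC \
      -XX:+UseParNewGC \
      -XX:CMSInitiatingOccupancyFraction=75 \
      -XX:+UseCMSInitiatingOccupancyOnly \
      -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"

Pinning -Xms to -Xmx avoids heap resizing pauses, and the occupancy settings make CMS start its concurrent cycle early enough to avoid falling back to a stop-the-world full GC.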
CMS works well with apps that are not very CPU-heavy, as it uses spare CPU cycles to shorten GC pauses. However, if your app is very CPU-centric, you can instead run Infinispan as a standalone server, for example a Hot Rod server, and tune each JVM independently.
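To illustrate, the application then talks to that separately tuned server over the network instead of embedding the cache in its own heap. A minimal sketch assuming the 4.1-era Java Hot Rod client API; the host, port, and keys are placeholders:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    public class HotRodClientDemo {
        public static void main(String[] args) {
            // connect to a Hot Rod server running elsewhere; host/port are placeholders
            RemoteCacheManager manager = new RemoteCacheManager("localhost", 11222);
            RemoteCache<String, String> cache = manager.getCache();
            cache.put("user:42", "some value");       // stored in the server JVM's heap
            System.out.println(cache.get("user:42")); // fetched over the wire
            manager.stop();
        }
    }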
For the 1000-node test, what modifications did you have to make to the jgroups.xml file? Did you test replication, distribution, or both on the 1000 nodes?