Our rule of thumb is not to set the heap larger than the amount of RAM on the machine. Note that the Sun JVM will never use all of the allocated heap, due to its algorithm for deciding when to do a major collection, so even on a 1GB system, setting the heap to 1GB still leaves enough free RAM to run the OS and the basic services.
Also, don't forget to set the young generation to 1/3 or 1/4 the size of the heap.
Finally, don't allocate a larger heap than you really need. If you have a 64-bit machine with 32GB of RAM, setting the heap to 32GB might seem like a good idea, until you hit your first major collection and realize that it is taking several minutes.
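As a concrete sketch, the sizing advice above translates into flags along these lines for a 1GB machine, with the young generation set to one quarter of the heap (the jar name is a placeholder; -Xmx caps the heap and -Xmn fixes the young generation size):

```shell
java -Xmx1g -Xmn256m -jar myapp.jar
```

Alternatively, -XX:NewRatio=3 expresses the same 1:3 young-to-tenured split without hard-coding the young generation size.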
I'm a bit confused by your answer, that if you have 1 gig of ram on a box it's ok to set the heap to 1 gig. Through reading your other postings I know that you have coded your own JVM based on Sun's source, so you know your stuff. And I know that you know the heap is just the new and tenured gen, not the whole JVM. Hopefully you can clear something up for me.
We run JBoss on Solaris. I have a basic formula that I use for my JVM memory calculations which is:
Total JVM memory footprint = JVM housekeeping + heap + permgen
In the formula above:
* JVM housekeeping (this is my own stupid term) = native code + JVM runtime data (code cache, etc.).
* heap = new gen + tenured gen
* permgen = permgen
When I run a prstat on the JBoss pid, what I see for SIZE is Total JVM mem footprint. I can do my little calculations with stats that I get from jmap -heap and come up with consistent sizes for JVM housekeeping across our boxes, depending on the particular deployment (and other things like # of threads).
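For reference, the measurements described above come from commands along these lines (the pid 1234 is a placeholder for the JBoss process id):

```shell
# Process SIZE (total JVM memory footprint) as seen by Solaris
prstat -p 1234

# Heap configuration and per-generation usage as seen by the JVM
jmap -heap 1234
```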
In our case, we pre-allocate (set min=max) for heap and permgen.
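Pre-allocating (min=max) means passing matching pairs of flags, roughly like this for a 1GB heap and 128MB permgen (sizes here are illustrative):

```shell
java -Xms1024m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=128m -jar myapp.jar
```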
I haven't yet found an option specifically for setting memory for the native code or runtime data; I don't know whether anyone would even want one.
Anyway (there really is a question buried here :), if you have a 1 gig box and you preallocate 1 gig for heap (new and tenured) and 128 or whatever for perm, knowing that the native code and runtime data might want 200 or 300 meg, aren't you going to be living in swap city?
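To make the arithmetic behind this worry explicit, here is a small sketch applying the footprint formula from above to the 1 gig scenario (the 250MB housekeeping figure is an assumption picked from the 200-300 meg range mentioned, not a measurement):

```java
public class FootprintCheck {
    // Total JVM memory footprint = JVM housekeeping + heap + permgen (all in MB)
    static long footprintMb(long housekeepingMb, long heapMb, long permgenMb) {
        return housekeepingMb + heapMb + permgenMb;
    }

    public static void main(String[] args) {
        long ramMb = 1024;                        // the 1 gig box
        long total = footprintMb(250, 1024, 128); // assumed ~250MB housekeeping
        System.out.println("Footprint: " + total + "MB, RAM: " + ramMb + "MB");
        if (total > ramMb) {
            // anything past physical RAM is the part that would hit swap
            System.out.println("Overshoot: " + (total - ramMb) + "MB");
        }
    }
}
```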
I'm going to have to test this out. I've been leaving 500 meg free on our Solaris boxes for the OS and the monitoring stuff the network guys run.
Actually, if you allocate 1GB of heap, it will never all be used. For example, with a young generation of 300MB, once heap usage reaches 700MB, the next GC will be a full GC. Why? Because the Sun JVM makes the pessimistic assumption that all objects in the young generation will survive and thus overflow into the tenured generation, and in this case the tenured generation could not absorb them, therefore a full GC is performed. So 300MB of the allocated heap is never used, and that 300MB effectively does not count towards the memory footprint; if it is swapped out, who cares? In fact, while the OS reserves that memory and counts it towards the process's total allocation, it may never actually commit physical pages for it.
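The trigger condition described above can be sketched as a simple predicate (the method name and the fixed sizes are illustrative, not the JVM's actual internals; the 700MB tenured capacity pairs with the 300MB young generation from the example):

```java
public class FullGcTrigger {
    // The Sun collector pessimistically assumes the entire young generation
    // may be promoted; if the tenured generation cannot absorb that much,
    // it performs a full GC instead of a minor one.
    static boolean fullGcLikely(long tenuredUsedMb, long tenuredCapMb, long youngSizeMb) {
        return tenuredCapMb - tenuredUsedMb < youngSizeMb;
    }

    public static void main(String[] args) {
        long tenuredCapMb = 700, youngMb = 300;
        // 350MB free in tenured >= 300MB young: a minor GC is still safe
        System.out.println(fullGcLikely(350, tenuredCapMb, youngMb)); // false
        // only 250MB free in tenured < 300MB young: full GC triggers
        System.out.println(fullGcLikely(450, tenuredCapMb, youngMb)); // true
    }
}
```

Once tenured usage passes ~400MB, every GC is a full GC, so total heap usage plateaus around 700MB and the last ~300MB of the 1GB heap is never touched.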
By the way, I verified this behavior in the 1.4.2 JVM, and I think that it is still the same in 1.5, not sure about 1.6 though.
Another, probably more accurate, approach is to bring up the system and see how much memory is taken by the services and the OS, then set the heap size to what is left over. For example, when running FC 6 on my laptop with 2GB of RAM, I use about 300MB for the OS and services, so the maximum JVM heap I would want is 1.7GB (2GB - 300MB). But like all tuning settings, this is simply an initial estimate; one then has to take measurements under load, and analyze those results, to find the optimal settings.
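The leftover-based estimate is just a subtraction, sketched here with the laptop numbers from above (in practice both inputs come from observing the running system, not from constants):

```java
public class HeapEstimate {
    // Max heap = total RAM minus what the OS and services are observed to use
    static long maxHeapMb(long totalRamMb, long osAndServicesMb) {
        return totalRamMb - osAndServicesMb;
    }

    public static void main(String[] args) {
        // 2GB laptop, ~300MB observed for OS and services
        System.out.println("Max heap: " + maxHeapMb(2048, 300) + "MB");
    }
}
```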
Very interesting... I'll experiment with the premise that if a certain percentage of the heap is swapped, it's OK, because it will not be used anyway.