It's a hybrid solution: if you don't enable any CacheStore, everything is stored in the Java heap. This is the fastest option, provided you can tune your JVM for it and the expected size (per VM) is not outrageously large.
The currently provided filesystem-based CacheStore implementation is not very fast, so for longer-term storage you can either look into the Cassandra CacheStore or into some of the alternatives being proposed in several community patches.
Working on off-heap storage is an interesting discussion subject, but so far it has been hard to prove there is any benefit, as any off-heap solution has many downsides as well... experiments are very welcome!
Thanks for clarifying this. I asked because I found the presentation below, which reports Infinispan as a caching framework implementing that strategy.
Frankly, I don't have any experience with off-heap memory, but I've read many reports of benefits, since it reduces the burden on the GC when handling big chunks of data.
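For context, the JDK already exposes a basic form of off-heap allocation through direct ByteBuffers: the payload memory lives outside the heap, so the GC only tracks the small wrapper object. A minimal sketch (class name is just for illustration):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 16 MB allocated outside the Java heap: the GC sees only the
        // tiny ByteBuffer wrapper, not the payload memory itself.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(16 * 1024 * 1024);
        offHeap.putLong(0, 42L);
        System.out.println(offHeap.isDirect());  // true
        System.out.println(offHeap.getLong(0));  // 42
    }
}
```

This is also the main trade-off in a nutshell: the data is invisible to the collector, but every read and write goes through explicit offsets and (de)serialization rather than ordinary object references.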
Right, I'm aware of the cool research going on with Apache DirectMemory; it would be nice to run some experiments on having a "Direct-Store" as second-level storage, but I'm skeptical about using it for the main (hot) entries.
Garbage collection can be hard to tune for larger heaps, but writing your own memory manager is very complex too: you have to fight memory fragmentation and make optimal use of pages. These are hard problems which the JVM handles very nicely, and people often forget that "C code" can hit the same issues with large heaps. Also, invoking native code from the JVM is not going to help achieve high performance figures... I'd be glad to be proven wrong, but until I can measure the benefits I consider the "good PR" some off-heap storages are receiving to be just a PR stunt, at least for the general-purpose case.
I'm sure that DirectMemory, for example, can be very useful if you have reusable chunks of byte buffers: for instance, if your application consistently deals with constant-sized data and you are actually able to write a buffer pool whose management doesn't slow you down too much, as that would be yet another contention point.
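To make the contention point concrete, here is a hypothetical sketch of such a pool of constant-sized direct buffers (class and method names are my own, not from DirectMemory or Infinispan). The shared free-list is exactly the extra synchronization point mentioned above:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative buffer pool for constant-sized off-heap chunks.
// The shared queue is the new contention point under heavy concurrency.
public class BufferPool {
    private final int bufferSize;
    private final ConcurrentLinkedQueue<ByteBuffer> free = new ConcurrentLinkedQueue<>();

    public BufferPool(int bufferSize, int initialBuffers) {
        this.bufferSize = bufferSize;
        for (int i = 0; i < initialBuffers; i++) {
            free.add(ByteBuffer.allocateDirect(bufferSize));
        }
    }

    public ByteBuffer acquire() {
        ByteBuffer b = free.poll();
        // Fall back to a fresh allocation when the pool is empty.
        return b != null ? b : ByteBuffer.allocateDirect(bufferSize);
    }

    public void release(ByteBuffer b) {
        b.clear(); // reset position/limit so the buffer can be reused
        free.add(b);
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(4096, 0);
        ByteBuffer b = pool.acquire();
        b.putInt(0, 7);
        pool.release(b);
        // The released buffer is handed out again instead of being
        // garbage collected and reallocated.
        System.out.println(pool.acquire() == b);  // true
    }
}
```

This only works cleanly because every buffer has the same size; with variable-sized data you are back to fighting fragmentation yourself, which is the point made earlier in the thread.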