I'm not sure the grid filesystem is a good fit for you. Perhaps what you need is eviction plus a cache store, with the cache organised so that keys map to path names.
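The key-to-path mapping suggested above could be sketched as follows. This is only an illustration using standard Java; the `keyToPath` helper is hypothetical (not part of any Infinispan API) and hashes the key so that entries fan out across subdirectories rather than piling up in one directory:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class KeyToPath {
    // Hypothetical helper: derive a stable filesystem path from a cache key
    // by hashing it and using the first hex digits as two levels of
    // subdirectories, so no single directory accumulates millions of files.
    static String keyToPath(String key) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(key.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            // e.g. "ab/cd/<full 40-char digest>"
            return hex.substring(0, 2) + "/" + hex.substring(2, 4) + "/" + hex;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(keyToPath("user:42:avatar.png"));
    }
}
```

Because the mapping is deterministic, every node computes the same path for the same key, which keeps the store layout consistent across a cluster.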
A setup like that, combined with LIRS eviction and a fast cache store implementation, should work well for you.
Thanks for your reply. Along the lines of that suggestion, we tried a very similar setup using [MemCache] + [FileSystem (ext3)], and we started to face a lot of invalidation problems when we tried to scale to multiple (not particularly high-end) servers. So we looked for a single solution combining the two components as [Cache + Store]. We tried Redis, since it can persist the cache to a file store (so no invalidation is needed), but it requires the entire data set, or at best the entire key set, to fit in memory. With a large key set (i.e. a large number of files) that is still neither feasible nor efficient, since not all keys are hot (i.e. not all files are accessed frequently).

Currently we are looking at Infinispan. Like Redis, it provides a persisted cache store (FileCacheStore), but it also lets us offload cold items to the filesystem via [Eviction + FileCacheStore]. Unlike Redis, an offloaded cold item is not gone or expired forever: it can be loaded back into memory and become a hot item again, depending on the memory policy (e.g. maxEntries), all handled automatically by Infinispan. I have tried the server clustering and memory federation, and Infinispan is simply marvelous and elegant!
So Infinispan already fits our requirement via [Eviction + FileCacheStore], but we would still have to design some additional file and metadata storage scheme on top of it. Then I stumbled upon Infinispan's GridFileSystem, which stores metadata in a replicated cache and file data in a distributed cache, all running in a cluster. That is good and seems to fit us. So, without reinventing any wheel, we are investigating and testing whether GridFileSystem really suits us; since we are dealing with files and metadata after all, it seems to fit us naturally.
As highlighted in the GridFileSystem tutorial at http://community.jboss.org/wiki/GridFileSystem:
Cache<String, byte[]> data = cacheManager.getCache("distributed");
Cache<String, GridFile.Metadata> metadata = cacheManager.getCache("replicated");
GridFilesystem fs = new GridFilesystem(data, metadata);
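Once constructed as in the tutorial snippet, the GridFilesystem is used through ordinary java.io-style streams. The following is only a sketch, assuming an already-configured EmbeddedCacheManager and the Infinispan 4.x API (where `getOutput`/`getInput` return plain `OutputStream`/`InputStream`); the path names are illustrative:

```java
import java.io.InputStream;
import java.io.OutputStream;

import org.infinispan.Cache;
import org.infinispan.io.GridFile;
import org.infinispan.io.GridFilesystem;
import org.infinispan.manager.EmbeddedCacheManager;

public class GridFsSketch {
    // Sketch only: cacheManager is assumed to already have the
    // "distributed" and "replicated" caches configured.
    static void demo(EmbeddedCacheManager cacheManager) throws Exception {
        Cache<String, byte[]> data = cacheManager.getCache("distributed");
        Cache<String, GridFile.Metadata> metadata = cacheManager.getCache("replicated");
        GridFilesystem fs = new GridFilesystem(data, metadata);

        // Write a file into the grid: file chunks land in the "distributed"
        // cache, metadata in the "replicated" cache.
        try (OutputStream out = fs.getOutput("/docs/hello.txt")) {
            out.write("hello grid".getBytes("UTF-8"));
        }

        // Read it back (from any node in the cluster).
        try (InputStream in = fs.getInput("/docs/hello.txt")) {
            byte[] buf = new byte[32];
            int n = in.read(buf);
            System.out.println(new String(buf, 0, n, "UTF-8"));
        }
    }
}
```

This compiles only with the Infinispan core jar on the classpath; treat the exact method signatures as something to verify against the Javadoc of your Infinispan version.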
Of course, we could use GridFileSystem purely as an in-memory file store. But why not use GridFileSystem as an in-memory store backed by a FileCacheStore? In the code above, if the metadata ("replicated") and data ("distributed") caches are backed by a FileCacheStore together with eviction, can I say it already works like the previous [Eviction + FileCacheStore] scenario? Then, instead of just a key/value cache store, GridFileSystem can offer us a higher-level file cache store, and together with the future DIST features it is simply unbeatable.
Hi Manik, do you have any idea where I can get the full configuration definitions of the "distributed" and "replicated" caches used in the GridFileSystem tutorial code above, so that I can test it out?
Danny, you can have a replicated cache for metadata and a distributed cache for data independently of using the grid file system.
Whether GridFS suits your use case depends on what kind of API you want to expose to your clients. If you want them to view it as a filesystem, it would be correct. If you want them to see it as a key/value data source, use the normal cache API.
Regarding the configuration, there is no specific configuration definition for it. If you take the all.xml shipped in the Infinispan distributions (http://anonsvn.jboss.org/repos/infinispan/tags/4.1.0.FINAL/core/src/main/resources/config-samples/all.xml), you can use the 'distributedCache' definition for the distributed cache, and the default cache configuration in that file for the replicated one. To access the default configuration, you would simply call cacheManager.getCache();
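To make this concrete, here is a minimal sketch of what two such named caches might look like in an Infinispan 4.x configuration file, combining replication/distribution with a FileCacheStore and eviction as discussed earlier in the thread. Element and attribute names follow the 4.1-era all.xml sample; the store locations and the maxEntries value are placeholder assumptions, so verify everything against the sample file in your distribution:

```xml
<infinispan>
  <!-- Metadata: replicated to every node, persisted to disk. -->
  <namedCache name="replicated">
    <clustering mode="replication"/>
    <loaders passivation="false" shared="false">
      <loader class="org.infinispan.loaders.file.FileCacheStore"
              fetchPersistentState="true" purgeOnStartup="false">
        <properties>
          <property name="location" value="/var/infinispan/metadata"/>
        </properties>
      </loader>
    </loaders>
  </namedCache>

  <!-- File chunks: distributed across the cluster; cold entries are
       evicted to the FileCacheStore and reloaded on access. -->
  <namedCache name="distributed">
    <clustering mode="distribution">
      <hash numOwners="2"/>
    </clustering>
    <eviction strategy="LIRS" maxEntries="10000"/>
    <loaders passivation="false" shared="false">
      <loader class="org.infinispan.loaders.file.FileCacheStore"
              fetchPersistentState="true" purgeOnStartup="false">
        <properties>
          <property name="location" value="/var/infinispan/data"/>
        </properties>
      </loader>
    </loaders>
  </namedCache>
</infinispan>
```

The caches would then be retrieved by name, as in the tutorial snippet, via cacheManager.getCache("distributed") and cacheManager.getCache("replicated").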
Hi Galder, thank you for your reply and for the pointers regarding GridFS. I shall give it an implementation trial. Thanks.