No, what you mean is *passivation*: when the cache is full, elements are evicted and saved to a store. When accessed again, they're fetched from the store (and removed from it) and placed back into memory.
CacheLoaderChaining is a different beast. What it provides is the ability to attach a cost to each store and to access the stores in order of cost: least costly first, most costly last. Example: we can define 2 cache loaders in a chain, a ClusteredCacheLoader and then a shared JDBCCacheLoader. When an element is not found in memory, the ClusteredCacheLoader tries to fetch it from the cluster (assuming a network round trip is faster than JDBC access); maybe a neighbor node has it. If found, we return it. If not, we go on and try to fetch the element from the DB via the JDBCCacheLoader.
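For illustration, such a two-loader chain might be configured roughly like this in the 1.3.0-style XML. This is a sketch from memory, not a copy-paste config: the element names follow the CacheLoaderConfiguration format, but verify the exact class names and property keys (the JDBC values below are placeholders) against the docs.

```xml
<!-- Sketch of a two-loader chain: cluster first, then the DB.
     Loaders are consulted in the order they are defined. -->
<attribute name="CacheLoaderConfiguration">
   <config>
      <passivation>false</passivation>

      <!-- Tried first: ask the other nodes in the cluster -->
      <cacheloader>
         <class>org.jboss.cache.loader.ClusteredCacheLoader</class>
         <!-- how long to wait for a reply from the cluster (ms) -->
         <properties>timeout=500</properties>
      </cacheloader>

      <!-- Tried last: fall back to the database -->
      <cacheloader>
         <class>org.jboss.cache.loader.JDBCCacheLoader</class>
         <!-- placeholder JDBC settings; substitute your own -->
         <properties>
            cache.jdbc.driver=org.hsqldb.jdbcDriver
            cache.jdbc.url=jdbc:hsqldb:mem:jbosscache
            cache.jdbc.user=sa
            cache.jdbc.password=
         </properties>
      </cacheloader>
   </config>
</attribute>
```

The ordering of the `<cacheloader>` elements is what encodes the cost: cheapest store first, most expensive last.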
So the point here is that we always try to avoid having to go to the DB, and as long as 1 member in the cluster has the element, we will always fetch from the cluster, not from the DB.
This could be taken even further by interposing a TcpDelegatingCacheLoader in front of the JDBCCacheLoader, so after accessing the ClusteredCacheLoader, and before going to the DB, we go to a remote (central) TCP server, which sits in front of the database, acting as a central cache for the DB. Note that if this guy fails, we can still go to the DB directly.
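To sketch that variant: you would slot a third `<cacheloader>` element between the clustered and JDBC ones from the earlier config. The package/class name and property keys here are from memory and may differ in your version, and the host/port values are purely hypothetical; check the wiki page below for the authoritative syntax.

```xml
<!-- Sketch: interposed between the ClusteredCacheLoader and the
     JDBCCacheLoader -- a central TCP cache server in front of the DB. -->
<cacheloader>
   <class>org.jboss.cache.loader.tcp.TcpDelegatingCacheLoader</class>
   <!-- hypothetical address of the central cache server -->
   <properties>
      host=cacheserver.mycompany.com
      port=7500
   </properties>
</cacheloader>
```

If that server is down, lookups simply fall through to the JDBCCacheLoader, which is the "we can still go to the DB directly" behavior described above.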
When composing these sorts of hierarchical caches, only your imagination is the limit! :-)
Have a look at http://wiki.jboss.org/wiki/Wiki.jsp?page=JBossCacheCacheLoaders - which initially describes the new configuration elements for cache loaders in JBoss Cache 1.3.0, and then talks about the architecture of cache loaders in 1.3.0, including the ability to chain cache loaders.