-
1. Re: Preload second level cache
brian.stansberry Jan 8, 2010 8:48 AM (in response to johan.kumps)
I take it populating the cache by reading the data from the db is not an option? What about coordinating this across the cluster with something like the JBoss AS HASingleton or a clustered Quartz job?
The reasons a CacheLoader is considered a poor fit for the JPA/Hibernate use case are:
1) What's the point of adding another moving part when there's already an authoritative persistent store of the data? Well, if you've got a use case you really can't solve a better way, this reason is somewhat negated.
2) Writing to the CacheLoader every time the data changes is expensive. But if it never changes, you only bear this cost the first time you populate the cache.
3) You've now got two copies of the data that can fall out of sync, e.g. you shut the cluster down and while it's down someone changes the database. Restart the cluster and you won't see those changes in your app. If you don't use a shared cache loader, this problem is much worse; you've got n + 1 copies of the data, where n is cluster size.
-
2. Re: Preload second level cache
johan.kumps Jan 8, 2010 9:20 AM (in response to brian.stansberry)
Thanks for your answer, Brian.
Preloading the cache by reading the data from the db using the finders is an option, but we are wrestling with the following: we must make sure that the preload is started on only one node in the second level cache cluster. We are talking about approximately 160,000 cache entries.
The whole architecture will be deployed on WebSphere. Nodes from several different WebSphere clusters will be joining the same second level cache cluster.
Any thoughts?

Johan
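[Editor's note: the "preload on only one node" requirement can be decided locally if every node can see the same cluster membership list (e.g. the JGroups view underlying the cache cluster). A minimal sketch of that idea, with illustrative class and node names that are not from any JBoss or WebSphere API: only the member that sorts first in the view runs the preload.]

```java
import java.util.Collections;
import java.util.List;

public class PreloadCoordinator {

    /**
     * Returns true if localNode should run the preload. By convention,
     * only the member that sorts first in the current cluster view does
     * the work; every node evaluates the same view and reaches the same
     * answer, so exactly one node preloads.
     */
    public static boolean isPreloadNode(String localNode, List<String> clusterView) {
        if (clusterView.isEmpty()) {
            return false;
        }
        return Collections.min(clusterView).equals(localNode);
    }

    public static void main(String[] args) {
        List<String> view = List.of("node-b", "node-a", "node-c");
        System.out.println(isPreloadNode("node-a", view)); // true: sorts first
        System.out.println(isPreloadNode("node-b", view)); // false
    }
}
```

Note this only chooses a worker; if the chosen node dies mid-preload, something still has to notice and re-run the job, which is where an HASingleton or a clustered scheduler earns its keep.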
-
3. Re: Preload second level cache
brian.stansberry Jan 8, 2010 10:23 AM (in response to johan.kumps)
I'm not an expert on WebSphere clustering, so I can't comment on any facilities it provides to run a job once across a set of clusters.
Perhaps a clustered Quartz job? Quartz can store job state in a database, so as long as all the servers have access to that database, execution can be coordinated that way.
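[Editor's note: the db-backed coordination described above is Quartz's clustered JDBC job store: with `isClustered` enabled, each scheduled fire time is claimed by exactly one node via row locks in the shared Quartz tables. A sketch of the relevant `quartz.properties`; the instance name, data-source name, and check-in interval are illustrative values, not from the thread.]

```properties
# Shared scheduler identity; instanceId must be unique per node.
org.quartz.scheduler.instanceName = CachePreloadScheduler
org.quartz.scheduler.instanceId = AUTO

# JDBC job store shared by all nodes in all WebSphere clusters.
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.dataSource = preloadDS

# Clustering: nodes coordinate through the db so a trigger fires on one node only.
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```

With this in place, the preload job can simply be scheduled on every node; the job store guarantees only one of them actually executes it, and another node picks it up if the executing node fails.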