1 Reply Latest reply on Jan 21, 2013 7:05 AM by akoledzhikov

    Strange evictions when using Infinispan as a 2nd level cache for Hibernate

    akoledzhikov

      Hi everyone,

      I'm using JBoss 7.1.1 and I've configured Infinispan 5.1.2 as a second-level cache for Hibernate. I have only 2 instances of an entity class, and its cache is configured for 100 entries. Yet when I repeatedly look those 2 up by id, I see SQL being executed (I've set hibernate.show_sql to true). So it appears that each entity kicks the other out when it is put into the cache, and vice versa. This is odd, since there should be 98 free slots left before eviction occurs.

      After some investigation, I've found that both entities get assigned to the same segment of the BoundedConcurrentHashMap backing this cache. I don't know why this happens - maybe their PKs have similar hash codes? Unfortunately, each segment of this map has a capacity of 1 (it is calculated as max_number_of_entries / concurrencyLevel, but never less than 1), and so eviction occurs.
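
      To make the arithmetic concrete, here is a minimal Java sketch of the capacity calculation as I understand it (the names and the concurrency-level value are mine for illustration, not Infinispan's actual code):

          public class SegmentCapacityDemo {
              public static void main(String[] args) {
                  int maxEntries = 100;        // eviction limit for the entity cache
                  int concurrencyLevel = 512;  // assumed: a default well above 100
                  // Formula described above: per-segment capacity is
                  // maxEntries / concurrencyLevel, floored at 1.
                  int perSegment = Math.max(1, maxEntries / concurrencyLevel);
                  // perSegment == 1, so two keys that hash to the same segment
                  // evict each other despite 98 "free" slots elsewhere.
                  System.out.println("per-segment capacity: " + perSegment);
              }
          }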

      I'm thinking of solving the problem by setting the concurrency level for this cache to 1 (so that all entries land in a single segment with the full capacity of 100). Is this a good idea, given that those entities are almost exclusively read (updated only in very rare circumstances)?

      Also, what setting should I provide in my persistence.xml to set it? I've tried several options, but they all failed. I think <property name="hibernate.cache.infinispan.entity.locking.concurrency_level" value="1"/> was my best shot, but it still didn't work.
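
      For reference, this is roughly how my best attempt looked in context (the persistence-unit name is made up):

          <persistence-unit name="my-pu"> <!-- name is illustrative -->
            <properties>
              <property name="hibernate.cache.use_second_level_cache" value="true"/>
              <property name="hibernate.cache.infinispan.entity.locking.concurrency_level"
                        value="1"/>
            </properties>
          </persistence-unit>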

       

      Any help will be greatly appreciated!

        • 1. Re: Strange evictions when using Infinispan as a 2nd level cache for Hibernate
          akoledzhikov

          Ok, it appears that the JBoss 7 / Hibernate / Infinispan integration is somewhat incomplete, and only a select few of the properties provided in persistence.xml are read by the default region factory. Unfortunately, switching to the JNDI/normal region factory leads to awkward NPEs during cache creation. So my workaround was to create an additional cache-container in the Infinispan subsystem configuration and set the desired properties there. Not the best approach, I guess, but whatever floats your boat...
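
          For anyone hitting the same wall, here is a sketch of what that workaround can look like in standalone.xml. The container and cache names and the exact attributes are illustrative - check them against the infinispan subsystem schema of your AS version:

              <cache-container name="my-hibernate-cache" default-cache="entity">
                  <local-cache name="entity">
                      <!-- concurrency-level 1 => a single segment whose
                           capacity equals max-entries -->
                      <locking concurrency-level="1"/>
                      <eviction strategy="LRU" max-entries="100"/>
                  </local-cache>
              </cache-container>

          If I remember correctly, on AS 7 the persistence unit can then be pointed at that container via the jboss-specific hibernate.cache.infinispan.container property in persistence.xml.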