8 Replies Latest reply on Aug 24, 2018 3:17 AM by suikast42

    Handling with DAOCacheStore

    suikast42

I have successfully set up Infinispan 9.3 in WildFly 11 in embedded mode with a custom cache store that accesses my entity DAO. So far so good, everything is working well.

But there are some points I would like clarified about caching with Infinispan.

       

      1.

The write-through implementation:

       

    @Override
    public void write(MarshalledEntry<? extends K, ? extends V> entry) {
        final byte[] bytes = toBytesFromWrapped(entry.getValue());
        try {
            Entity o = (Entity) ctx.getMarshaller().objectFromByteBuffer(bytes);
            o = ServiceUtil.getEntityDaoLocal().makePersistent(o);
            ctx.getCache().replace(o.getName(), o);
        } catch (Exception e) {
            throw new PersistenceException(e);
        }
    }
      

       

Is it OK to update the cache entry this way? The reason for the replace is that the ID of the new entity is set by the JPA provider once it passes through the entity manager. I assume this is also why Infinispan's JpaStore does not allow @GeneratedValue on ID fields.

       

      2. Is there a way to lock cache entries for read access like a database write lock?

       

3. What is the Infinispan approach to starting with warm caches? (Do an initial fill when the cache is created for the first time.)

        • 1. Re: Handling with DAOCacheStore
          william.burns

          1. I would not recommend updating the cache in such a way. Depending on how this is configured, you could cause a deadlock, since the primary owner could be holding the lock while the store is updating on a backup. Then when the replace is fired it would again try to acquire the lock on the primary, but be unable to.

2. Sure, but you need to use pessimistic transactions to do this. You can use the FORCE_WRITE_LOCK flag when invoking a get to lock the key in the current transaction. It is briefly covered in the Infinispan 9.3 User Guide.
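As a rough sketch of what that could look like (assuming a cache configured with transactional mode and pessimistic locking; the helper and key names here are placeholders, not API from the thread):

```java
import javax.transaction.TransactionManager;

import org.infinispan.Cache;
import org.infinispan.context.Flag;

public class LockingRead {
    // Sketch only: requires a cache configured with
    // transactionMode=TRANSACTIONAL and lockingMode=PESSIMISTIC.
    public static <V> V getWithWriteLock(Cache<String, V> cache, String key) throws Exception {
        TransactionManager tm = cache.getAdvancedCache().getTransactionManager();
        tm.begin();
        try {
            // FORCE_WRITE_LOCK makes this read acquire the key's write lock,
            // blocking other writers until the transaction commits or rolls back.
            V value = cache.getAdvancedCache()
                           .withFlags(Flag.FORCE_WRITE_LOCK)
                           .get(key);
            tm.commit();
            return value;
        } catch (Exception e) {
            tm.rollback();
            throw e;
        }
    }
}
```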

3. Use the preload flag on the store, as detailed in the Infinispan 9.3 User Guide.
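Declaratively, that flag could look roughly like this (the store class name is a placeholder for your custom store):

```xml
<local-cache name="entity-cache">
   <persistence>
      <!-- preload="true": load all entries from the store into the cache
           at startup, so the cache starts warm -->
      <store class="com.example.DaoCacheStore" preload="true"/>
   </persistence>
</local-cache>
```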

          • 2. Re: Handling with DAOCacheStore
            suikast42

            Thanks for the clarifications.

             

            But there still something unclear for me.

             

1. I will use our existing Hibernate/JPA DAO services for the read- and write-through store. After I save the entry via my DAO service, the actual entry reference changes because of the merge done by the entity manager.

             

So I need to replace the entry. Is there a clean solution for this?

             

2. I found that out today: the reader must set a lock too.

             

3. I noticed that flag in the docs, but I derived my custom store from the advanced store and not from the custom store. Which method do I need to override for the initial load?

            • 3. Re: Handling with DAOCacheStore
              william.burns

              1. Have you tried using the functional commands in the ConcurrentMap interface, such as merge? You could also try out our functional API as well.
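Since Infinispan's Cache interface extends ConcurrentMap, merge is available directly on the cache. As a self-contained illustration of the merge semantics (a ConcurrentHashMap stands in for the cache here, and the keys/values are made up for the example):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class MergeDemo {
    public static void main(String[] args) {
        // ConcurrentHashMap stands in for an Infinispan Cache in this demo;
        // org.infinispan.Cache implements ConcurrentMap, so the same call
        // shape works against a cache.
        ConcurrentMap<String, String> cache = new ConcurrentHashMap<>();
        cache.put("order-1", "detached");

        // merge atomically replaces the existing value using the remapping
        // function; if the key were absent, the given value would be inserted.
        cache.merge("order-1", "persisted", (oldValue, newValue) -> newValue);

        System.out.println(cache.get("order-1")); // prints "persisted"
    }
}
```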

3. You have to implement AdvancedCacheLoader or one of its subclasses. The various SPI classes are better described in the Infinispan 9.3 User Guide. Implementing this interface makes your loader participate in preload and other bulk operations (e.g. size and streams). The AdvancedLoadWriteStore interface combines the advanced loader and writer.

              • 4. Re: Handling with DAOCacheStore
                william.burns

Also note that if you are using Infinispan 9.3, the AdvancedCacheLoader interface has some improvements: new publishEntries/publishKeys methods replace the old `process` method. You can still use `process` if you like, but the new methods perform better.

                 

                If you need help implementing a Publisher, we internally use rxjava2, but you can use any of the reactive streams implementations that are available, such as rxjava2, akka streams and reactor to name a few. You can take a look at the existing implementations we have in Infinispan to help.
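Infinispan's publishEntries/publishKeys methods use the Reactive Streams `Publisher` type; the JDK's `java.util.concurrent.Flow` API mirrors the same four interfaces, so as a self-contained sketch of the publish/subscribe contract those methods rely on (the key names here are illustrative, not from any real store):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class KeyPublisherDemo {
    public static void main(String[] args) throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        // SubmissionPublisher pushes items to subscribers asynchronously,
        // honoring the demand they request (backpressure).
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                @Override public void onSubscribe(Flow.Subscription s) {
                    s.request(Long.MAX_VALUE); // unbounded demand for this demo
                }
                @Override public void onNext(String key) { received.add(key); }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });

            // In a real store implementation, these would be the keys or
            // entries read from the underlying DAO.
            publisher.submit("key-1");
            publisher.submit("key-2");
        } // close() signals onComplete to the subscriber

        done.await();
        System.out.println(received); // prints [key-1, key-2]
    }
}
```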

                 

                Single file store

                infinispan/SingleFileStore.java at master · infinispan/infinispan · GitHub

                 

And here is our JPA store implementation:

                infinispan/JpaStore.java at master · infinispan/infinispan · GitHub

                • 5. Re: Handling with DAOCacheStore
                  suikast42

Thank you very much for the suggestions.

                   

I will check this out after my holiday.

                  • 6. Re: Handling with DAOCacheStore
                    suikast42

So here I am again, back fresh from my holiday.

                     

I checked out your suggestions, but they do not fulfil my needs.

                     

I think it's better to clarify my use case for the cache first.

                     

I have an existing Java EE 7 project and I must improve the performance of that backend. For that purpose I have done the following things:

1. Query optimisations

2. Improve queries for level 1 cache hits

3. Improve queries for the level 2 query and entity cache.

                     

And now I have frequently changing data that I can't handle with Hibernate level 2 caching.

So my idea is to cache this data (in a single local cache with JTA) with Infinispan, so that the cache loads all the pertinent data at start time and after that only writes against the JPA data store. That way only inserts and updates hit the database, no selects.

Finally, replacing the JPA queries with cache queries should be the afterburner of my optimisation.

                     

The problem with the existing JPA data source is that the model uses the @GeneratedValue approach:

                        @Id
                        @SequenceGenerator(name = "hibernate_sequence", allocationSize = 50)
                        @GeneratedValue(strategy = GenerationType.SEQUENCE)
                        @Column(name = "ID")
                        private Long id;

                     

The JTA stuff works like a charm (I have configured the cache in NON_XA mode).
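For reference, a minimal sketch of that kind of configuration in programmatic form; treat the exact builder calls as an assumption against the 9.3 API (NON_XA corresponds to the cache enlisting as a Synchronization rather than a full XAResource):

```java
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.transaction.LockingMode;
import org.infinispan.transaction.TransactionMode;

ConfigurationBuilder builder = new ConfigurationBuilder();
builder.transaction()
       // TRANSACTIONAL + useSynchronization(true) maps to NON_XA mode:
       // the cache registers a Synchronization with the JTA transaction
       // instead of enlisting as a full XAResource
       .transactionMode(TransactionMode.TRANSACTIONAL)
       .useSynchronization(true)
       .lockingMode(LockingMode.PESSIMISTIC);
```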

But I have a problem setting the ID after the write in the JPA store. If I don't set the ID, the next put on the cache will fail, because it is each time a new entity for the entity manager.

                     

As far as I can see, the suggested JpaStore can't handle this. Line 116:

                     

     if (idJavaType.isAnnotationPresent(GeneratedValue.class)) {
         throw new JpaStoreException(
               "Entity class has one identifier, but it must not have @GeneratedValue annotation");
     }

                     

                     

I checked out the process / processKeys / processEntries methods, but they are triggered only at cache start time and not on every cache operation. Am I wrong?

                     

I hope you can show me a proper way to handle this gap.

                    • 7. Re: Handling with DAOCacheStore
                      rvansa

Hi, your use case makes sense, though you're going down an unbeaten path. I've created [ISPN-9454] Support @GeneratedValue in JpaStore - JBoss Issue Tracker to address this. But I don't know when it will be worked on; contributions are welcome.

                      • 8. Re: Handling with DAOCacheStore
                        suikast42

Thank you for your assistance.

                         

That's my current approach:

                         

1. I create an application-scoped CDI bean which warms up the cache in @PostConstruct.

2. I don't use the cache as an injection point, but only go through my service.

3. After persist I flush the entity manager and do the cache operation.

4. I disabled the write- and read-through approach; that does not work with this pattern.
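The warm-up part of the steps above could be sketched like this; the class and service names (EntityDao, EntityCacheService, CacheWarmer) are hypothetical, and an EJB @Singleton @Startup bean is used here so that @PostConstruct runs eagerly at deployment rather than on first use:

```java
import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.inject.Inject;

@Singleton
@Startup
public class CacheWarmer {

    // Hypothetical DAO and cache-backed service, injected by the container
    @Inject
    EntityDao entityDao;

    @Inject
    EntityCacheService cacheService;

    @PostConstruct
    void warmUp() {
        // Initial fill: load all pertinent entities once at startup,
        // so later reads are served from the cache without selects.
        entityDao.findAll().forEach(cacheService::putInCache);
    }
}
```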

                         

That fulfils my needs for now.