
    Infinispan: Why are my EntryProcessors being serialized in 9.3? Why weren't they in 8.2?

    david01337

      In our application, we were using Infinispan 8.2 when we started running our app in a clustered environment, beginning testing with 2 nodes.

      This appeared to work OK. We then updated to WildFly 14, tried using Infinispan 9.3, and began getting org.infinispan.commons.marshall.NotSerializableException. We thought something we were trying to cache might not be serializable, but after digging in we found that it was actually the org.infinispan.jcache.embedded.functions.Invoke object being serialized, which included an instance of our javax.cache.processor.EntryProcessor implementation.
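
      To make the failure concrete, here's a minimal sketch (hypothetical class and key names, not our real code) of the shape of call that now blows up for us when the cache is clustered:

      import javax.cache.Cache;
      import javax.cache.processor.EntryProcessor;
      import javax.cache.processor.EntryProcessorException;
      import javax.cache.processor.MutableEntry;

      // Hypothetical processor, deliberately NOT Serializable -- same shape as ours.
      public class TouchProcessor implements EntryProcessor<String, Long, Long> {

          @Override
          public Long process(MutableEntry<String, Long> entry, Object... args)
                  throws EntryProcessorException {
              long next = (entry.exists() ? entry.getValue() : 0L) + 1L;
              entry.setValue(next);
              return next;
          }

          // The invocation site. On a clustered 9.3 cache this is where the
          // org.infinispan.jcache.embedded.functions.Invoke wrapper (with our
          // processor inside it) gets marshalled and NotSerializableException is thrown.
          public static Long touch(Cache<String, Long> cache, String key) {
              return cache.invoke(key, new TouchProcessor());
          }
      }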

       

      First off, why is the entry processor serialized at all?

      Obviously, the node receiving the serialized entry processor must have the class definition locally (to deserialize it), so why don't we just create an instance on the receiving side and invoke using the locally created instance plus serialized arguments?
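
      For comparison, the closest pattern I can see to that today (just a sketch, hypothetical names) is a stateless processor that implements Serializable and takes all of its per-call data as serializable arguments, so that marshalling the processor instance itself is trivial:

      import java.io.Serializable;
      import javax.cache.Cache;
      import javax.cache.processor.EntryProcessor;
      import javax.cache.processor.EntryProcessorException;
      import javax.cache.processor.MutableEntry;

      // Hypothetical, stateless processor: the instance carries no state of its own,
      // so serializing it is cheap; the per-call data travels as an argument.
      public class AppendSuffixProcessor
              implements EntryProcessor<String, String, String>, Serializable {

          private static final long serialVersionUID = 1L;

          @Override
          public String process(MutableEntry<String, String> entry, Object... args)
                  throws EntryProcessorException {
              String suffix = (String) args[0];  // per-call argument, marshalled alongside the processor
              String updated = (entry.exists() ? entry.getValue() : "") + suffix;
              entry.setValue(updated);
              return updated;
          }

          public static String append(Cache<String, String> cache, String key, String suffix) {
              return cache.invoke(key, new AppendSuffixProcessor(), suffix);
          }
      }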

       

      Assuming we're actually going to run the entry processor on the receiving node, why would Infinispan do this? Why not just communicate the K, V, and R and let the receiver use those directly? Why on earth would we need to re-run the (potentially) expensive logic in the processor? This probably just speaks to my ignorance and narrow view of Infinispan given how we use it, but it doesn't make sense to me.

       

      Finally, what changed between Infinispan 8.2 and 9.3, and why, such that I COULD run clustered with non-serializable EntryProcessors in 8.2, but CANNOT run clustered in 9.3 without my entry processors being serialized?

       

      For background, this is causing a problem because inside our entry processors we need to do some JNDI lookups, and for whatever reason those lookups are not found in an InitialContext on the receiving node; I'm not sure why. I'm just surprised we're re-doing the entry processor's work at all, and I want to understand the motivation and whether there is a configuration option to NOT do the work again on another node.
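
      To be concrete about what the processors are doing (hypothetical names again; the java:global name is just for illustration), the JNDI lookup happens inside process(), so it runs on whichever node ends up executing the processor, and that's where the lookup comes up empty for us:

      import java.io.Serializable;
      import javax.cache.processor.EntryProcessor;
      import javax.cache.processor.EntryProcessorException;
      import javax.cache.processor.MutableEntry;
      import javax.naming.InitialContext;
      import javax.naming.NamingException;

      // Rough, hypothetical sketch of what our processors do: look up a service in
      // JNDI and use it while mutating the entry. The lookup is done inside
      // process(), so it executes on whichever node runs the processor.
      public class EnrichProcessor
              implements EntryProcessor<String, String, String>, Serializable {

          private static final long serialVersionUID = 1L;

          @Override
          public String process(MutableEntry<String, String> entry, Object... args)
                  throws EntryProcessorException {
              try {
                  InitialContext ctx = new InitialContext();
                  // Hypothetical JNDI name -- on the receiving node this lookup is
                  // what currently fails for us.
                  Object service = ctx.lookup("java:global/our-app/SomeService");
                  String current = entry.exists() ? entry.getValue() : "";
                  entry.setValue(current + " enriched by " + service);
                  return entry.getValue();
              } catch (NamingException e) {
                  throw new EntryProcessorException(e);
              }
          }
      }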