NPE on Hibernate second-level cache invalidation
draganb Apr 6, 2015 8:48 AM
Hello,
We are experiencing strange behavior that is preventing us from using the Hibernate L2 cache with Infinispan: occasionally we start getting NPEs during invalidation of second-level cache entries, and we have to restart the cluster to bring things back to normal.
We are running a cluster of two JBoss EAP 6.2.0.GA instances on two separate physical machines.
We are using the default configuration for the Hibernate L2 cache defined in the Infinispan subsystem of standalone-full-ha.xml (see the attached config.xml).
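For anyone who doesn't want to open the attachment: the cache-container is essentially the stock EAP 6.2 "hibernate" container, roughly like the sketch below. This is paraphrased from memory, so the attached config.xml is authoritative and exact attribute values may differ.

```xml
<!-- Approximate default "hibernate" cache-container from standalone-full-ha.xml;
     see the attached config.xml for the exact configuration we run. -->
<cache-container name="hibernate" default-cache="local-query" module="org.jboss.as.jpa.hibernate:4">
    <transport lock-timeout="60000"/>
    <local-cache name="local-query">
        <transaction mode="NONE"/>
        <eviction strategy="LRU" max-entries="10000"/>
        <expiration max-idle="100000"/>
    </local-cache>
    <invalidation-cache name="entity" mode="SYNC">
        <transaction mode="NON_XA"/>
        <eviction strategy="LRU" max-entries="10000"/>
        <expiration max-idle="100000"/>
    </invalidation-cache>
    <replicated-cache name="timestamps" mode="ASYNC">
        <transaction mode="NONE"/>
        <eviction strategy="NONE"/>
    </replicated-cache>
</cache-container>
```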
Also attached are persistence.xml and the stack trace of the exception that occurred on the node that initiated the transaction (the node that sent the invalidation commands to the other nodes).
The problematic entity (CampaignConfig) has a composite primary key. The SessionFactory name is explicitly set, as can be seen in persistence.xml, and everything works fine most of the time.
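For context, the composite key class follows the standard @EmbeddedId pattern. Below is a simplified sketch of its shape; the field names here are made up for illustration, but the real class is likewise Serializable with value-based equals/hashCode (since the cache keys in the invalidation command are of the form CampaignConfig#CampaignConfigId@..., these methods are what Infinispan relies on to match entries).

```java
import java.io.Serializable;
import java.util.Objects;

// Simplified sketch of our composite key class; field names are invented.
// In the real mapping this class is annotated @Embeddable and the entity
// references it via @EmbeddedId.
public class CampaignConfigId implements Serializable {

    private final long campaignId;
    private final String configName;

    public CampaignConfigId(long campaignId, String configName) {
        this.campaignId = campaignId;
        this.configName = configName;
    }

    // Value-based equality over all key fields, as Hibernate requires
    // for composite identifiers.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CampaignConfigId)) return false;
        CampaignConfigId other = (CampaignConfigId) o;
        return campaignId == other.campaignId
                && Objects.equals(configName, other.configName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(campaignId, configName);
    }
}
```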
On the node being invalidated, the following stack trace is printed (and causes the entire XA transaction to roll back):
09:50:14,655 WARN [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (OOB-20,shared=udp) Problems invoking command SingleRpcCommand{cacheName='CampaignConfig', command=InvalidateCommand{keys=[CampaignConfig#CampaignConfigId@2e6c, CampaignConfig#CampaignConfigId@2e8b]}}: java.lang.NullPointerException
at org.infinispan.remoting.InboundInvocationHandlerImpl.handleWithWaitForBlocks(InboundInvocationHandlerImpl.java:119)
at org.infinispan.remoting.InboundInvocationHandlerImpl.handle(InboundInvocationHandlerImpl.java:86)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.executeCommandFromLocalCluster(CommandAwareRpcDispatcher.java:247)
at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.handle(CommandAwareRpcDispatcher.java:220)
at org.jgroups.blocks.RequestCorrelator.handleRequest(RequestCorrelator.java:484)
at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:391)
at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:249)
at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:600)
at org.jgroups.blocks.mux.MuxUpHandler.up(MuxUpHandler.java:130)
at org.jgroups.JChannel.up(JChannel.java:707)
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1025)
at org.jgroups.protocols.RSVP.up(RSVP.java:188)
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
at org.jgroups.protocols.UNICAST2.handleDataReceived(UNICAST2.java:765)
at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:420)
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:645)
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:147)
at org.jgroups.protocols.FD.up(FD.java:253)
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
at org.jgroups.protocols.MERGE3.up(MERGE3.java:290)
at org.jgroups.protocols.Discovery.up(Discovery.java:359)
at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2607)
at org.jgroups.protocols.TP.passMessageUp(TP.java:1260)
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1822)
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1795)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.7.0_09]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.7.0_09]
at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_09]
I suspect we have misconfigured something and would appreciate any help with this.
Thanks and regards,
Dragan
Attachments:
- persistence.xml.zip (793 bytes)
- config.xml.zip (670 bytes)
- TxInitiator.txt.zip (3.1 KB)