43 Replies · Latest reply on Feb 21, 2015 4:00 PM by Paul Ferraro

    Getting lots of exception related to infinispan while load testing

    Rishikesh Darandale Newbie

      I have configured WF 8.0.0.Final in domain mode with two server nodes and mod_cluster. I have used the default settings for infinispan and undertow, but while doing load testing I am observing a lot of errors related to the infinispan cache. Can anybody verify these and suggest how to avoid them?

       

      One more thing: why am I getting an exception related to DummyTransaction? Is infinispan using DummyTransactionManager?

       

      ERROR [org.infinispan.transaction.tm.DummyTransaction] (default task-10) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=null, backupKeyLocks=null, topologyId=2, isFromStateTransfer=false} org.infinispan.transaction.synchronization.SyncLocalTransaction@222c} org.infinispan.transaction.synchronization.SynchronizationAdapter@224b: org.infinispan.commons.CacheException: Could not commit.

       

        ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-11) ISPN000136: Execution error: org.infinispan.commons.CacheException: java.lang.RuntimeException: Failure to marshal argument(s)

       

      ERROR [io.undertow.request] (default task-9) Blocking request failed HttpServerExchange{ GET /myapp/error.jsp}: java.lang.RuntimeException: java.lang.IllegalStateException: Transaction DummyTransaction{xid=DummyXid{id=6989}, status=1} is not in a valid state to be invoking cache operations on

        • 1. Re: Getting lots of exception related to infinispan while load testing
          Paul Ferraro Master

          Can you say more about your use case?  How exactly are you using Infinispan?

           

          The BatchModeTransactionManager (which uses DummyTransactions) is used when the transaction mode is NONE and batching is enabled.
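          In the WildFly Infinispan subsystem, that combination looks like the fragment below: no &lt;transaction&gt; element (so the transaction mode defaults to NONE) plus batching="true". This is an illustrative sketch, not taken from your configuration; the container and cache names are placeholders.

```xml
<cache-container name="web" default-cache="dist">
    <!-- No <transaction> element: transaction mode defaults to NONE -->
    <!-- batching="true" makes the cache use the BatchModeTransactionManager,
         whose transactions appear in the logs as DummyTransaction -->
    <distributed-cache name="dist" batching="true" mode="ASYNC"/>
</cache-container>
```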

          • 2. Re: Getting lots of exception related to infinispan while load testing
            Rishikesh Darandale Newbie

            I was testing high availability with load balancing. After the application had been under load for 5-10 minutes, I brought one node down so the load shifted to the other node via session replication. During this time I saw the above-mentioned exceptions.

            • 3. Re: Getting lots of exception related to infinispan while load testing
              Paul Ferraro Master

              Can you post the full stack traces for these 3 error messages?

              • 4. Re: Re: Getting lots of exception related to infinispan while load testing
                johnhpatton Newbie

                Here's the first one:

                 

                [Server:WILDFLYNODE] 16:27:23,669 ERROR [org.infinispan.transaction.tm.DummyTransaction] (default task-245) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=null, isMarkedForRollback=false, lockedKeys=null, backupKeyLocks=null, topologyId=2, isFromStateTransfer=false} org.infinispan.transaction.synchronization.SyncLocalTransaction@8d2d} org.infinispan.transaction.synchronization.SynchronizationAdapter@8d4c: org.infinispan.commons.CacheException: Could not commit.

                [Server:WILDFLYNODE]        at org.infinispan.transaction.synchronization.SynchronizationAdapter.afterCompletion(SynchronizationAdapter.java:60)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyTransaction.notifyAfterCompletion(DummyTransaction.java:263)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyTransaction.runCommitTx(DummyTransaction.java:312)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyTransaction.commit(DummyTransaction.java:69)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyBaseTransactionManager.commit(DummyBaseTransactionManager.java:80)

                [Server:WILDFLYNODE]        at org.infinispan.batch.BatchContainer.resolveTransaction(BatchContainer.java:101)

                [Server:WILDFLYNODE]        at org.infinispan.batch.BatchContainer.endBatch(BatchContainer.java:83)

                [Server:WILDFLYNODE]        at org.infinispan.batch.BatchContainer.endBatch(BatchContainer.java:64)

                [Server:WILDFLYNODE]        at org.infinispan.CacheImpl.endBatch(CacheImpl.java:777)

                [Server:WILDFLYNODE]        at org.infinispan.AbstractDelegatingCache.endBatch(AbstractDelegatingCache.java:53)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.infinispan.InfinispanBatcher$1.end(InfinispanBatcher.java:56)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.infinispan.InfinispanBatcher$1.close(InfinispanBatcher.java:46)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.undertow.session.DistributableSession.requestDone(DistributableSession.java:72)

                [Server:WILDFLYNODE]        at io.undertow.servlet.spec.ServletContextImpl.updateSessionAccessTime(ServletContextImpl.java:704) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.spec.HttpServletResponseImpl.responseDone(HttpServletResponseImpl.java:522) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:287) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:73) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:146) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:168) [undertow-core-1.0.4.Final.jar:1.0.4.Final]

                [Server:WILDFLYNODE]        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:727) [undertow-core-1.0.4.Final.jar:1.0.4.Final]

                [Server:WILDFLYNODE]        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE] Caused by: javax.transaction.xa.XAException

                [Server:WILDFLYNODE]        at org.infinispan.transaction.TransactionCoordinator.handleCommitFailure(TransactionCoordinator.java:204)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.TransactionCoordinator.commit(TransactionCoordinator.java:156)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.synchronization.SynchronizationAdapter.afterCompletion(SynchronizationAdapter.java:58)

                [Server:WILDFLYNODE]        ... 23 more

                 

                Here's the second one:

                 

                [Server:WILDFLYNODE] 15:31:17,912 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-162) ISPN000136: Execution error: org.infinispan.commons.CacheException: java.lang.RuntimeException: Failure to marshal argument(s)

                [Server:WILDFLYNODE]        at org.infinispan.commons.util.Util.rewrapAsCacheException(Util.java:581)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:176)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:521)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:281)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.distribution.TxDistributionInterceptor.prepareOnAffectedNodes(TxDistributionInterceptor.java:219)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.distribution.TxDistributionInterceptor.visitPrepareCommand(TxDistributionInterceptor.java:203)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)

                [Server:WILDFLYNODE]        at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:96)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)

                [Server:WILDFLYNODE]        at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:96)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.EntryWrappingInterceptor.visitPrepareCommand(EntryWrappingInterceptor.java:96)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.invokeNextAndCommitIf1Pc(AbstractTxLockingInterceptor.java:78)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitPrepareCommand(PessimisticLockingInterceptor.java:83)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.NotificationInterceptor.visitPrepareCommand(NotificationInterceptor.java:36)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:114)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.TxInterceptor.visitPrepareCommand(TxInterceptor.java:101)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)

                [Server:WILDFLYNODE]        at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:96)

                [Server:WILDFLYNODE]        at org.infinispan.statetransfer.TransactionSynchronizerInterceptor.visitPrepareCommand(TransactionSynchronizerInterceptor.java:42)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.statetransfer.StateTransferInterceptor.handleTopologyAffectedCommand(StateTransferInterceptor.java:263)

                [Server:WILDFLYNODE]        at org.infinispan.statetransfer.StateTransferInterceptor.handleTxCommand(StateTransferInterceptor.java:194)

                [Server:WILDFLYNODE]        at org.infinispan.statetransfer.StateTransferInterceptor.visitPrepareCommand(StateTransferInterceptor.java:94)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:112)

                [Server:WILDFLYNODE]        at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:96)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:110)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:73)

                [Server:WILDFLYNODE]        at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:96)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:98)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.BatchingInterceptor.handleDefault(BatchingInterceptor.java:66)

                [Server:WILDFLYNODE]        at org.infinispan.commands.AbstractVisitor.visitPrepareCommand(AbstractVisitor.java:96)

                [Server:WILDFLYNODE]        at org.infinispan.commands.tx.PrepareCommand.acceptVisitor(PrepareCommand.java:125)

                [Server:WILDFLYNODE]        at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.TransactionCoordinator.commit(TransactionCoordinator.java:154)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.synchronization.SynchronizationAdapter.afterCompletion(SynchronizationAdapter.java:58)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyTransaction.notifyAfterCompletion(DummyTransaction.java:263)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyTransaction.runCommitTx(DummyTransaction.java:312)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyTransaction.commit(DummyTransaction.java:69)

                [Server:WILDFLYNODE]        at org.infinispan.transaction.tm.DummyBaseTransactionManager.commit(DummyBaseTransactionManager.java:80)

                [Server:WILDFLYNODE]        at org.infinispan.batch.BatchContainer.resolveTransaction(BatchContainer.java:101)

                [Server:WILDFLYNODE]        at org.infinispan.batch.BatchContainer.endBatch(BatchContainer.java:83)

                [Server:WILDFLYNODE]        at org.infinispan.batch.BatchContainer.endBatch(BatchContainer.java:64)

                [Server:WILDFLYNODE]        at org.infinispan.CacheImpl.endBatch(CacheImpl.java:777)

                [Server:WILDFLYNODE]        at org.infinispan.AbstractDelegatingCache.endBatch(AbstractDelegatingCache.java:53)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.infinispan.InfinispanBatcher$1.end(InfinispanBatcher.java:56)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.infinispan.InfinispanBatcher$1.close(InfinispanBatcher.java:46)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.undertow.session.DistributableSession.requestDone(DistributableSession.java:72)

                [Server:WILDFLYNODE]        at io.undertow.servlet.spec.ServletContextImpl.updateSessionAccessTime(ServletContextImpl.java:704) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.spec.HttpServletResponseImpl.responseDone(HttpServletResponseImpl.java:522) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:287) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:73) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:146) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]

                [Server:WILDFLYNODE]        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:168) [undertow-core-1.0.4.Final.jar:1.0.4.Final]

                [Server:WILDFLYNODE]        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:727) [undertow-core-1.0.4.Final.jar:1.0.4.Final]

                [Server:WILDFLYNODE]        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE] Caused by: java.lang.RuntimeException: Failure to marshal argument(s)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.marshallCall(CommandAwareRpcDispatcher.java:333)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.processSingleCall(CommandAwareRpcDispatcher.java:352)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.invokeRemoteCommand(CommandAwareRpcDispatcher.java:167)

                [Server:WILDFLYNODE]        ... 76 more

                [Server:WILDFLYNODE] Caused by: java.util.ConcurrentModificationException

                [Server:WILDFLYNODE]        at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.util.HashMap$EntryIterator.next(HashMap.java:966) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.util.HashMap$EntryIterator.next(HashMap.java:964) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:681)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:62)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:115)

                [Server:WILDFLYNODE]        at org.jboss.as.clustering.marshalling.SimpleMarshalledValue.getBytes(SimpleMarshalledValue.java:77)

                [Server:WILDFLYNODE]        at org.jboss.as.clustering.marshalling.SimpleMarshalledValue.writeExternal(SimpleMarshalledValue.java:150)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:876)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:62)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:115)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.exts.ReplicableCommandExternalizer.writeCommandParameters(ReplicableCommandExternalizer.java:57)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.exts.ReplicableCommandExternalizer.writeObject(ReplicableCommandExternalizer.java:42)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.exts.ReplicableCommandExternalizer.writeObject(ReplicableCommandExternalizer.java:30)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.writeObject(ExternalizerTable.java:395)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:148)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:62)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:115)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.exts.ReplicableCommandExternalizer.writeCommandParameters(ReplicableCommandExternalizer.java:57)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.marshallParameters(CacheRpcCommandExternalizer.java:116)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.writeObject(CacheRpcCommandExternalizer.java:100)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.exts.CacheRpcCommandExternalizer.writeObject(CacheRpcCommandExternalizer.java:59)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.core.ExternalizerTable$ExternalizerAdapter.writeObject(ExternalizerTable.java:395)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:148)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractObjectOutput.writeObject(AbstractObjectOutput.java:62)

                [Server:WILDFLYNODE]        at org.jboss.marshalling.AbstractMarshaller.writeObject(AbstractMarshaller.java:115)

                [Server:WILDFLYNODE]        at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectToObjectStream(AbstractJBossMarshaller.java:74)

                [Server:WILDFLYNODE]        at org.infinispan.marshall.core.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:77)

                [Server:WILDFLYNODE]        at org.infinispan.commons.marshall.AbstractMarshaller.objectToBuffer(AbstractMarshaller.java:41)

                [Server:WILDFLYNODE]        at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectToBuffer(AbstractDelegatingMarshaller.java:85)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectToBuffer(MarshallerAdapter.java:23)

                [Server:WILDFLYNODE]        at org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher.marshallCall(CommandAwareRpcDispatcher.java:331)

                [Server:WILDFLYNODE]        ... 78 more

                [Server:WILDFLYNODE] Caused by: an exception which occurred:

                [Server:WILDFLYNODE]        in object java.util.HashMap@b2b1787

                [Server:WILDFLYNODE]        in object org.jboss.as.clustering.marshalling.SimpleMarshalledValue@b2b1787

                [Server:WILDFLYNODE]        in object org.infinispan.commands.write.ReplaceCommand@5db6f745

                [Server:WILDFLYNODE]        in object org.infinispan.commands.tx.PrepareCommand@716c9c16

                 

                 

                Here's the third one:

                 

                [Server:WILDFLYNODE] 15:53:03,796 ERROR [io.undertow.request] (default task-167) Blocking request failed HttpServerExchange{ GET /CONTEXTROOT/dwr/interface/ContactUsAction.js}: java.util.ConcurrentModificationException

                [Server:WILDFLYNODE]        at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.util.HashMap$KeyIterator.next(HashMap.java:960) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.findListeners(InfinispanSessionManager.java:322)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.triggerPrePassivationEvents(InfinispanSessionManager.java:299)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager$SchedulableSession.close(InfinispanSessionManager.java:376)

                [Server:WILDFLYNODE]        at org.wildfly.clustering.web.undertow.session.DistributableSession.requestDone(DistributableSession.java:71)

                [Server:WILDFLYNODE]        at io.undertow.servlet.spec.ServletContextImpl.updateSessionAccessTime(ServletContextImpl.java:704)

                [Server:WILDFLYNODE]        at io.undertow.servlet.spec.HttpServletResponseImpl.responseDone(HttpServletResponseImpl.java:522)

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:287)

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227)

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:73)

                [Server:WILDFLYNODE]        at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:146)

                [Server:WILDFLYNODE]        at io.undertow.server.Connectors.executeRootHandler(Connectors.java:168)

                [Server:WILDFLYNODE]        at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:727)

                [Server:WILDFLYNODE]        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]

                [Server:WILDFLYNODE]        at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

                • 5. Re: Re: Getting lots of exception related to infinispan while load testing
                  Paul Ferraro Master

                  It looks like these errors are due to the rate at which you are accessing sessions.  By default, WF replicates sessions *asynchronously*.  While this default is adequate for real-world web usage, it can often lead to issues like the above during stress/load tests, where sessions are accessed far more rapidly, i.e. faster than replication happens.

                  I've improved the resiliency of ASYNC mode to prevent concurrent session access in WF 8.1:

                  WFLY-3136 SRV 7.7.2 non-compliance · 40d73b2 · wildfly/wildfly · GitHub

                  Optimize WFLY-3136 fix. · 326dd26 · wildfly/wildfly · GitHub

                   

                  For 8.0.0.Final, you can try switching the cache mode of the default cache of your web cache container from ASYNC to SYNC.
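                  Concretely, that amounts to setting mode="SYNC" on the default cache of the web cache container. The fragment below is an illustrative sketch (the container and cache names shown are the WF 8 defaults; adjust the rest to your own configuration):

```xml
<cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <!-- mode="SYNC": a session update blocks until the other owners acknowledge it,
         so a request routed to another node cannot observe a stale copy -->
    <distributed-cache name="dist" batching="true" mode="SYNC" owners="2" l1-lifespan="0">
        <locking isolation="REPEATABLE_READ"/>
        <file-store/>
    </distributed-cache>
</cache-container>
```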

                  • 6. Re: Re: Re: Getting lots of exception related to infinispan while load testing
                    johnhpatton Newbie

                    We found the source of all of these exceptions.  They were all related to our infinispan configuration.  I'll post this here in case others have a similar issue.

                     

                    Environment:  HA, using TCP stack for jgroups

                     

                    This was our original configuration for the "web" cache container:

                     

                                    <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">

                                        <transport stack="tcp" cluster="${jboss.cluster.group.name}" lock-timeout="60000"/>

                                        <distributed-cache name="dist" batching="true" mode="ASYNC" owners="4" l1-lifespan="0">

                                            <locking isolation="REPEATABLE_READ"/>

                                            <file-store/>

                                        </distributed-cache>

                                    </cache-container>

                     

                    After reading and making numerous experimental changes that only made the problem worse, we stumbled across this description of the "owners" setting:

                     

                    When using DIST mode, a session is only stored on X nodes in the cluster, where X is determined by the owners attribute.  Nevertheless, *any* node can query the cache and retrieve the session.  When the query is performed on a node that is *not* an owner, an RPC is made to retrieve the value from one of the owners.  Hence, whichever server in the cluster you visit, a given session is always accessible.

                     

                    I realized the ConcurrentModificationException sounded like it could be related to multiple nodes modifying the same session, since they could all potentially be "owners", so I adjusted the config like so:

                     

                                    <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">

                                        <transport stack="tcp" cluster="${jboss.cluster.group.name}" lock-timeout="60000"/>

                                        <distributed-cache name="dist" batching="true" mode="ASYNC" owners="1" l1-lifespan="0">

                                            <locking isolation="REPEATABLE_READ"/>

                                            <file-store/>

                                        </distributed-cache>

                                    </cache-container>

                     

                    We're not entirely sure why the default for owners is 4, but I think it should be 1.

                    • 7. Re: Re: Re: Re: Getting lots of exception related to infinispan while load testing
                      johnhpatton Newbie

                      We adjusted our config to use SYNC instead of ASYNC, changed to use owners="2" for the 2 nodes we have, and received this exception again:

                       

                       

                      [Server:WILDFLYNODE] 11:13:23,861 ERROR [org.infinispan.transaction.tm.DummyTransaction] (default task-150) ISPN000111: afterCompletion() failed for SynchronizationAdapter{localTransaction=LocalTransaction{remoteLockedNodes=[slave:WILDFLYNODE/wildfly-cluster, master_dc:WILDFLYNODE/wildfly-cluster], isMarkedForRollback=false, lockedKeys=null, backupKeyLocks=[8VcOSp9NyOfazTlyB6kpIw9d], topologyId=2, isFromStateTransfer=false} org.infinispan.transaction.synchronization.SyncLocalTransaction@1173} org.infinispan.transaction.synchronization.SynchronizationAdapter@1192: org.infinispan.commons.CacheException: Could not commit.
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.synchronization.SynchronizationAdapter.afterCompletion(SynchronizationAdapter.java:60)
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.tm.DummyTransaction.notifyAfterCompletion(DummyTransaction.java:263)
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.tm.DummyTransaction.runCommitTx(DummyTransaction.java:312)
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.tm.DummyTransaction.commit(DummyTransaction.java:69)
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.tm.DummyBaseTransactionManager.commit(DummyBaseTransactionManager.java:80)
                      [Server:WILDFLYNODE]    at org.infinispan.batch.BatchContainer.resolveTransaction(BatchContainer.java:101)
                      [Server:WILDFLYNODE]    at org.infinispan.batch.BatchContainer.endBatch(BatchContainer.java:83)
                      [Server:WILDFLYNODE]    at org.infinispan.batch.BatchContainer.endBatch(BatchContainer.java:64)
                      [Server:WILDFLYNODE]    at org.infinispan.CacheImpl.endBatch(CacheImpl.java:777)
                      [Server:WILDFLYNODE]    at org.infinispan.AbstractDelegatingCache.endBatch(AbstractDelegatingCache.java:53)
                      [Server:WILDFLYNODE]    at org.wildfly.clustering.web.infinispan.InfinispanBatcher$1.end(InfinispanBatcher.java:56)
                      [Server:WILDFLYNODE]    at org.wildfly.clustering.web.infinispan.InfinispanBatcher$1.close(InfinispanBatcher.java:46)
                      [Server:WILDFLYNODE]    at org.wildfly.clustering.web.undertow.session.DistributableSession.requestDone(DistributableSession.java:72)
                      [Server:WILDFLYNODE]    at io.undertow.servlet.spec.ServletContextImpl.updateSessionAccessTime(ServletContextImpl.java:704) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                      [Server:WILDFLYNODE]    at io.undertow.servlet.spec.HttpServletResponseImpl.responseDone(HttpServletResponseImpl.java:522) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                      [Server:WILDFLYNODE]    at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:287) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                      [Server:WILDFLYNODE]    at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                      [Server:WILDFLYNODE]    at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:73) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                      [Server:WILDFLYNODE]    at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:146) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                      [Server:WILDFLYNODE]    at io.undertow.server.Connectors.executeRootHandler(Connectors.java:168) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                      [Server:WILDFLYNODE]    at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:727) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                      [Server:WILDFLYNODE]    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]
                      [Server:WILDFLYNODE]    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]
                      [Server:WILDFLYNODE]    at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]
                      [Server:WILDFLYNODE] Caused by: javax.transaction.xa.XAException
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.TransactionCoordinator.handleCommitFailure(TransactionCoordinator.java:204)
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.TransactionCoordinator.commit(TransactionCoordinator.java:156)
                      [Server:WILDFLYNODE]    at org.infinispan.transaction.synchronization.SynchronizationAdapter.afterCompletion(SynchronizationAdapter.java:58)
                      [Server:WILDFLYNODE]    ... 23 more

                       

                      We've left it at SYNC, kept the state-transfer timeout setting, and moved back to owners="1":

                       

                      <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
                          <transport stack="tcp" cluster="${jboss.cluster.group.name}" lock-timeout="60000"/>
                          <distributed-cache name="dist" batching="true" mode="SYNC" owners="1" l1-lifespan="0">
                              <state-transfer timeout="120000"/>
                              <locking isolation="REPEATABLE_READ"/>
                              <file-store/>
                          </distributed-cache>
                      </cache-container>

                      • 8. Re: Getting lots of exception related to infinispan while load testing
                        Paul Ferraro Master

                        I explained previously how asynchronous replication can cause ConcurrentModificationExceptions if requests for the same session arrive faster than the system can replicate them.  I also explained the measures we've taken to alleviate this condition in 8.1.

                        Setting owners="1" is a terrible idea.  The only reason this resolves the CMEs is because sessions are no longer replicated!  owners="1" means that a given web session will only ever be stored on 1 node in your cluster (specifically, the node that created the session).  Therefore, if your cluster loses a node, any web sessions owned by that node will be lost.  WildFly will be unable to fail over to another node, since no other node has a copy of those sessions!

                         

                        Can you describe the load scenario used in your tests?  e.g. how many sessions, how many requests/sec, how many concurrent requests, etc.  Chances are you need to increase your JGroups OOB thread pool size to match the expected synchronous RPC load.

                        • 9. Re: Re: Re: Re: Getting lots of exception related to infinispan while load testing
                          johnhpatton Newbie

                          Hey Paul,

                           

                          Yeah, I agree with everything you're saying and completely understand what the impact is.  This was primarily to get it to work...

                           

                          We're handling 100+ requests per second on 2 nodes, 20 concurrent requests.  As for async vs sync, we did switch it to SYNC.  Which setting controls the JGroups OOB thread pool size?  Also, keep in mind that we're using the TCP stack for JGroups:

                           

                          <stack name="tcp">
                              <transport type="TCP" socket-binding="jgroups-tcp"/>
                              <protocol type="TCPPING">
                                  <property name="initial_hosts">${jgroups.ha.tcpping.initial_hosts}</property>
                                  <property name="num_initial_members">${jgroups.ha.tcpping.num_initial_members}</property>
                                  <property name="timeout">${jgroups.ha.tcpping.timeout}</property>
                              </protocol>
                              <protocol type="MERGE2"/>
                              <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                              <protocol type="FD"/>
                              <protocol type="VERIFY_SUSPECT"/>
                              <protocol type="pbcast.NAKACK2"/>
                              <protocol type="UNICAST3"/>
                              <protocol type="pbcast.STABLE"/>
                              <protocol type="pbcast.GMS"/>
                              <protocol type="MFC"/>
                              <protocol type="FRAG2"/>
                              <protocol type="RSVP"/>
                          </stack>

                           

                          Also: THANKS SO MUCH for helping us work through this.  We're doing some insane testing here because Googlebot slams us pretty hard in the middle of the night and we want to make sure we're able to sleep soundly.

                          • 10. Re: Re: Re: Re: Re: Getting lots of exception related to infinispan while load testing
                            Paul Ferraro Master

                            The purpose of using SYNC was to prevent the CMEs if requests for the same session are triggered too quickly.  The side effect, of course, is that each request takes longer, since the request will not return until the session is successfully replicated to the other owners.  Infinispan flags sync messages as OOB, meaning that they will be handled by the JGroups OOB thread pool, rather than the default thread pool for incoming messages.

                            The easiest way to increase the size of the OOB thread pool is via the jgroups subsystem:

                             

                            <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="tcp">
                              <stack name="tcp">
                                <transport type="TCP" socket-binding="...">
                                  <property name="oob_thread_pool.max_threads">300</property>
                                </transport>
                                <!-- ... -->
                              </stack>
                            </subsystem>

                             

                            The default max size is 300, which ought to already be more than enough to handle 40 concurrent requests (i.e. 20x2).

                            • 11. Re: Re: Re: Re: Re: Getting lots of exception related to infinispan while load testing
                              Rituraj Sinha Novice

                              Thanks Paul for all the information.  What would the configuration be if we are using the TCP stack rather than UDP?  In our case we are using the TCP stack with TCPPING for clustering on the ha profile.

                               

                              thanks

                              Rituraj

                              • 12. Re: Re: Re: Re: Re: Getting lots of exception related to infinispan while load testing
                                Paul Ferraro Master

                                I updated my previous post after realizing that you were using the tcp-based stack.

                                • 13. Re: Re: Re: Re: Re: Re: Getting lots of exception related to infinispan while load testing
                                  Paul Ferraro Master

                                  Also, if you require a pure tcp stack (i.e. your network forbids multicast), make sure to modify your NAKACK2 protocol accordingly:

                                  e.g.

                                  <protocol type="pbcast.NAKACK2">
                                      <property name="use_mcast_xmit">false</property>
                                      <property name="use_mcast_xmit_req">false</property>
                                  </protocol>
                                  
                                  • 14. Re: Re: Getting lots of exception related to infinispan while load testing
                                    johnhpatton Newbie

                                    Here are the config changes; there are 2 nodes, 1 master and 1 slave.  jgroups.ha.tcpping.initial_hosts holds the ip1[port],ip2[port] value, jgroups.ha.tcpping.num_initial_members is set to 2, and jgroups.ha.tcpping.timeout is set to 3000.

                                     

                                    Cache Container:

                                     

                                    <cache-container name="web" default-cache="dist" module="org.wildfly.clustering.web.infinispan">
                                        <transport stack="tcp" cluster="${jboss.cluster.group.name}" lock-timeout="60000"/>
                                        <distributed-cache name="dist" batching="true" mode="SYNC" owners="2" l1-lifespan="0">
                                            <state-transfer timeout="120000"/>
                                            <locking isolation="REPEATABLE_READ"/>
                                            <file-store/>
                                        </distributed-cache>
                                    </cache-container>

                                     

                                    Jgroups Element:

                                     

                                    <subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="tcp">
                                        <stack name="tcp">
                                            <transport type="TCP" socket-binding="jgroups-tcp">
                                                <property name="oob_thread_pool.max_threads">300</property>
                                            </transport>
                                            <protocol type="TCPPING">
                                                <property name="initial_hosts">${jgroups.ha.tcpping.initial_hosts}</property>
                                                <property name="num_initial_members">${jgroups.ha.tcpping.num_initial_members}</property>
                                                <property name="timeout">${jgroups.ha.tcpping.timeout}</property>
                                            </protocol>
                                            <protocol type="MERGE2"/>
                                            <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                                            <protocol type="FD"/>
                                            <protocol type="VERIFY_SUSPECT"/>
                                            <protocol type="pbcast.NAKACK2"/>
                                            <protocol type="UNICAST3"/>
                                            <protocol type="pbcast.STABLE"/>
                                            <protocol type="pbcast.GMS"/>
                                            <protocol type="MFC"/>
                                            <protocol type="FRAG2"/>
                                            <protocol type="RSVP"/>
                                        </stack>
                                    </subsystem>

                                     


                                     

                                     

                                    Still getting this, but not nearly as frequently:

                                    [Server:WILDFLYNODE1] 15:05:54,235 ERROR [io.undertow.request] (default task-163) UT005023: Exception handling request to /CONTEXTROOT/dwr/interface/ContactUsAction.js: java.util.ConcurrentModificationException
                                    [Server:WILDFLYNODE1]   at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926) [rt.jar:1.7.0_51]
                                    [Server:WILDFLYNODE1]   at java.util.HashMap$KeyIterator.next(HashMap.java:960) [rt.jar:1.7.0_51]
                                    [Server:WILDFLYNODE1]   at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.findListeners(InfinispanSessionManager.java:322)
                                    [Server:WILDFLYNODE1]   at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.triggerPostActivationEvents(InfinispanSessionManager.java:309)
                                    [Server:WILDFLYNODE1]   at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.findSession(InfinispanSessionManager.java:164)
                                    [Server:WILDFLYNODE1]   at org.wildfly.clustering.web.undertow.session.DistributableSessionManager.getSession(DistributableSessionManager.java:110)
                                    [Server:WILDFLYNODE1]   at io.undertow.servlet.spec.ServletContextImpl.getSession(ServletContextImpl.java:673) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.servlet.spec.ServletContextImpl.getSession(ServletContextImpl.java:692) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:62) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.security.handlers.SecurityInitialHandler.handleRequest(SecurityInitialHandler.java:76) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:25) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                                    [Server:WILDFLYNODE1]   at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61)
                                    [Server:WILDFLYNODE1]   at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:25) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                                    [Server:WILDFLYNODE1]   at org.wildfly.mod_cluster.undertow.metric.RunningRequestsHttpHandler.handleRequest(RunningRequestsHttpHandler.java:68)
                                    [Server:WILDFLYNODE1]   at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:25) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:240) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:227) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:73) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:146) [undertow-servlet-1.0.0.Final.jar:1.0.0.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.server.Connectors.executeRootHandler(Connectors.java:168) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                                    [Server:WILDFLYNODE1]   at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:727) [undertow-core-1.0.4.Final.jar:1.0.4.Final]
                                    [Server:WILDFLYNODE1]   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]
                                    [Server:WILDFLYNODE1]   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]
                                    [Server:WILDFLYNODE1]   at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]
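For what it's worth, the trace above is the classic fail-fast HashMap iterator behaviour: any structural modification while a caller is iterating makes `next()` throw.  A minimal single-JVM reproduction (nothing Infinispan-specific; `CmeDemo` is just an illustration, and ConcurrentHashMap is shown only as an example of an iterator that tolerates concurrent writes):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeDemo {
    // Returns true if structurally modifying the map mid-iteration throws CME.
    static boolean throwsCme(Map<String, String> map) {
        map.put("a", "1");
        map.put("b", "2");
        try {
            for (String key : map.keySet()) {
                map.put("c", "3"); // structural modification during iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("HashMap threw: " + throwsCme(new HashMap<>()));
        System.out.println("ConcurrentHashMap threw: " + throwsCme(new ConcurrentHashMap<>()));
    }
}
```

The HashMap case throws, the ConcurrentHashMap case does not, because its iterators are weakly consistent rather than fail-fast.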

                                     

                                    And also these started happening shortly afterwards:

                                     

                                    [Server:WILDFLYNODE1] 15:30:55,081 ERROR [io.undertow.servlet.request] (default task-283) UT015005: Error invoking method requestDestroyed on listener class org.springframework.web.context.request.RequestContextListener: java.lang.IllegalStateException: Transaction DummyTransaction{xid=DummyXid{id=11996}, status=1} is not in a valid state to be invoking cache operations on.

                                    etc...
