0 Replies Latest reply on Mar 1, 2015 3:43 PM by frank_s

    Infinispan HotRod server stuck under heavy connect load, not servicing requests

    frank_s Newbie

      I am working on a product that has many applications running on several servers.  They share caches through a cluster of 6 stand-alone Infinispan servers, which the applications access using a Java HotRod RemoteCacheManager.
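      For context, the clients set up the connection roughly as below.  This is a minimal sketch, not our exact code: the host name is illustrative (our clients are configured with all 6 servers), and the cache name cacheTest is the one that appears in the log messages later in this post.

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.Configuration;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientSketch {

    // Illustrative address; the real clients list all 6 Infinispan servers.
    static Configuration buildConfig() {
        return new ConfigurationBuilder()
                .addServer().host("ispn-node1.example.com").port(11222)
                .build();
    }

    public static void main(String[] args) {
        RemoteCacheManager rcm = new RemoteCacheManager(buildConfig());
        try {
            RemoteCache<String, byte[]> cache = rcm.getCache("cacheTest");
            cache.put("someKey", new byte[] {1, 2, 3});
        } finally {
            rcm.stop(); // release connections; skipping this leaks sockets
        }
    }
}
```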

       

      We have been performing some resilience tests and have managed to render more than one Infinispan server inoperable.  The affected server will accept new HotRod connections, but will not service them.  This causes the clients to time out and disconnect, then reconnect, over and over again, the net result being a ton of CLOSE_WAIT connections.  When the TCP connections are checked with a tool like netstat, we can see data on the receive queue of the HotRod server port (11222) that is never drained.  Since these sockets are never read, they stay in this half-closed CLOSE_WAIT state indefinitely.

       

      To double-check the server’s responsiveness, I telnet to port 11222 and enter some nonsense.  Normally the server would send back an error message, but in this bad state it doesn’t respond at all.

       

      Before I describe the situation in more detail, here is some information about our setup:

      • Infinispan server version 7.1.1.  Also tested 7.0.2 and witnessed the same behavior.
      • 6 servers, distributed caches, 2 owners.

       

      I have attached a copy of our clustered.xml file.  There is also a stack trace attached (more on that later).

       

      The test we have been performing isolates one of the servers, as if there were a network outage.  The isolated host runs some of our applications and one of the Infinispan servers.  We have been using iptables to drop all traffic except ssh, so we can still observe the machine.  We use these iptables commands:

       

      iptables -F

      iptables -X

      iptables -P INPUT DROP

      iptables -P OUTPUT DROP

      iptables -P FORWARD DROP

      iptables -A INPUT -i lo -j ACCEPT

      iptables -A OUTPUT -o lo -j ACCEPT

      iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT

      iptables -A OUTPUT -p tcp --sport 22 --dport 513:65535 -m state --state ESTABLISHED -j ACCEPT

      iptables -A INPUT -j DROP

      iptables -A OUTPUT -j DROP

       

      When the testing begins, we see some error messages in the log.  I believe these are expected given the nature of our test:

       

      ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (remote-thread--p3-t187) ISPN000136: Execution error: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command SingleRpcCommand{cacheName='cacheTest', command=PutKeyValueCommand{key=[B0x033e1f6d6f6e6974..[34], value=[B0x034c0000014bc2c4..[10], flags=[IGNORE_RETURN_VALUES], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedLifespanExpirableMetadata{lifespan=86400000, version=NumericVersion{version=17732932122709415}}, successful=true}}

      at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:533) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:283) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.distribution.BaseDistributionInterceptor.handleNonTxWriteCommand(BaseDistributionInterceptor.java:236) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.distribution.NonTxDistributionInterceptor.visitPutKeyValueCommand(NonTxDistributionInterceptor.java:93) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:386) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:474) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:187) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:48) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:35) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:179) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:108) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:159) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:145) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:35) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:84) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:31) [infinispan-core.jar:7.1.1.Final]

      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]

      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]

      at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

      02-25 17:04:21,106 WARN [org.infinispan.remoting.inboundhandler.NonTotalOrderPerCacheInboundInvocationHandler] (remote-thread--p3-t187) ISPN000071: Caught exception when handling command SingleRpcCommand{cacheName='cacheTest', command=PutKeyValueCommand{key=[B0x033e1f6d6f6e6974..[34], value=[B0x034c0000014bc2c4..[10], flags=[IGNORE_RETURN_VALUES], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedLifespanExpirableMetadata{lifespan=86400000, version=NumericVersion{version=17732932122709415}}, successful=true}}: org.infinispan.remoting.transport.jgroups.SuspectException: One or more nodes have left the cluster while replicating command SingleRpcCommand{cacheName='cacheTest', command=PutKeyValueCommand{key=[B0x033e1f6d6f6e6974..[34], value=[B0x034c0000014bc2c4..[10], flags=[IGNORE_RETURN_VALUES], putIfAbsent=false, valueMatcher=MATCH_ALWAYS, metadata=EmbeddedLifespanExpirableMetadata{lifespan=86400000, version=NumericVersion{version=17732932122709415}}, successful=true}}

      at org.infinispan.remoting.transport.jgroups.JGroupsTransport.invokeRemotely(JGroupsTransport.java:533) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.remoting.rpc.RpcManagerImpl.invokeRemotely(RpcManagerImpl.java:283) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.distribution.BaseDistributionInterceptor.handleNonTxWriteCommand(BaseDistributionInterceptor.java:236) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.distribution.NonTxDistributionInterceptor.visitPutKeyValueCommand(NonTxDistributionInterceptor.java:93) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.EntryWrappingInterceptor.invokeNextAndApplyChanges(EntryWrappingInterceptor.java:386) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.EntryWrappingInterceptor.setSkipRemoteGetsAndInvokeNextForDataCommand(EntryWrappingInterceptor.java:474) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.EntryWrappingInterceptor.visitPutKeyValueCommand(EntryWrappingInterceptor.java:187) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.locking.AbstractLockingInterceptor.visitPutKeyValueCommand(AbstractLockingInterceptor.java:48) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:111) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:35) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.statetransfer.StateTransferInterceptor.handleNonTxWriteCommand(StateTransferInterceptor.java:179) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.statetransfer.StateTransferInterceptor.visitPutKeyValueCommand(StateTransferInterceptor.java:108) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.CacheMgmtInterceptor.updateStoreStatistics(CacheMgmtInterceptor.java:159) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.CacheMgmtInterceptor.visitPutKeyValueCommand(CacheMgmtInterceptor.java:145) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:97) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:102) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.AbstractVisitor.visitPutKeyValueCommand(AbstractVisitor.java:35) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.write.PutKeyValueCommand.acceptVisitor(PutKeyValueCommand.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.remote.BaseRpcInvokingCommand.processVisitableCommand(BaseRpcInvokingCommand.java:39) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.remote.SingleRpcCommand.perform(SingleRpcCommand.java:48) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.remoting.inboundhandler.BasePerCacheInboundInvocationHandler.invokePerform(BasePerCacheInboundInvocationHandler.java:84) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.remoting.inboundhandler.BaseBlockingRunnable.run(BaseBlockingRunnable.java:31) [infinispan-core.jar:7.1.1.Final]

      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_51]

      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_51]

      at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

       

      During the simulated outage, many of the applications begin to attempt reconnects.  Some let the RemoteCacheManager handle recovery, and may throttle work for a while, but others are not as well behaved: they stop and discard their RemoteCacheManager, then set up a new one in an attempt to recover, possibly multiple times.  This leads to a large number of new connections being created and started, and during this time some of the Infinispan servers stop responding.

       

      The only way to recover the server is to restart it.  One big clue appeared when we stopped a stuck server: just as the server is being stopped, it emits a sudden flood of exceptions like this one:

       

      ERROR [org.infinispan.server.hotrod.HotRodDecoder] (HotRodServerWorker-3-60) ISPN005003: Exception reported: org.infinispan.IllegalLifecycleStateException: ISPN000323: Cache 'dedup' is in 'TERMINATED' state and so it does not accept new invocations. Either restart it or recreate the cache container.

      at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:89) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InvocationContextInterceptor.handleDefault(InvocationContextInterceptor.java:71) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.AbstractVisitor.visitSizeCommand(AbstractVisitor.java:72) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.commands.read.SizeCommand.acceptVisitor(SizeCommand.java:35) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:333) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.cache.impl.CacheImpl.size(CacheImpl.java:367) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.cache.impl.CacheImpl.size(CacheImpl.java:362) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.cache.impl.AbstractDelegatingCache.size(AbstractDelegatingCache.java:269) [infinispan-core.jar:7.1.1.Final]

      at org.infinispan.server.hotrod.Decoder2x$.customReadHeader(Decoder2x.scala:274) [infinispan-server-hotrod.jar:7.1.1.Final]

      at org.infinispan.server.hotrod.HotRodDecoder.customDecodeHeader(HotRodDecoder.scala:153) [infinispan-server-hotrod.jar:7.1.1.Final]

      at org.infinispan.server.core.AbstractProtocolDecoder.org$infinispan$server$core$AbstractProtocolDecoder$$decodeHeader(AbstractProtocolDecoder.scala:137) [infinispan-server-core.jar:7.1.1.Final]

      at org.infinispan.server.core.AbstractProtocolDecoder.decodeDispatch(AbstractProtocolDecoder.scala:70) [infinispan-server-core.jar:7.1.1.Final]

      at org.infinispan.server.core.AbstractProtocolDecoder.decode(AbstractProtocolDecoder.scala:61) [infinispan-server-core.jar:7.1.1.Final]

      at io.netty.handler.codec.ReplayingDecoder.callDecode(ReplayingDecoder.java:362) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:149) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at org.infinispan.server.core.AbstractProtocolDecoder.channelRead(AbstractProtocolDecoder.scala:459) [infinispan-server-core.jar:7.1.1.Final]

      at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:332) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:318) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:787) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:125) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:507) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:464) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [netty-all-4.0.20.Final.jar:4.0.20.Final]

      at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_51]

       

      This suggests that there were tasks blocked the whole time, which were only cleaned up at shutdown.  We took a thread dump from another server that was stuck in the same way and found 160 threads blocked in java.util.concurrent.FutureTask.awaitDone (see the stack trace snippet below).  This number exactly matches the number of configured HotRod worker threads.  We gave the servers plenty of time to time out, but they never did.  We believe that these 160 threads are all waiting, and so cannot service any new connections, including connections that have already been closed by the client side.  So the half-closed TCP sockets pile up on the server.

       

      The blocked threads are calling get() on the FutureTask, which blocks indefinitely.  I would have expected to see get(long, TimeUnit) instead.  (In the dump, the worker thread is inside BulkUtil.getAllKeys, running a MapReduceTask to answer a bulk get-keys request.)  Perhaps some other mechanism is supposed to come along and interrupt or cancel the tasks, but it doesn’t seem to be doing so.
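      The difference between the two get variants can be shown with a small, self-contained sketch (class and method names are mine; the never-completing task stands in for a map/reduce part whose owning node has been isolated):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureTaskTimeoutDemo {

    // Created but never run, so it never completes -- like a task part
    // waiting on a node that has dropped off the network.
    static final FutureTask<String> stuck = new FutureTask<>(() -> "never run");

    // A bounded wait returns control once the timeout elapses.
    static boolean timesOut() {
        try {
            // stuck.get() with no arguments would park this thread forever,
            // which is the state the 160 worker threads are in.
            stuck.get(100, TimeUnit.MILLISECONDS);
            return false;
        } catch (TimeoutException expected) {
            return true;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("bounded get timed out: " + timesOut());
    }
}
```

      If the worker threads used the bounded variant (or something cancelled the tasks on suspicion of a failed node), the threads would eventually return to the pool instead of being lost.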

       

      Thread 25410: (state = BLOCKED)

      - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may be imprecise)

      - java.util.concurrent.locks.LockSupport.park(java.lang.Object) @bci=14, line=186 (Compiled frame)

      - java.util.concurrent.FutureTask.awaitDone(boolean, long) @bci=165, line=425 (Compiled frame)

      - java.util.concurrent.FutureTask.get() @bci=13, line=187 (Compiled frame)

      - org.infinispan.distexec.mapreduce.MapReduceTask$TaskPart.get() @bci=5, line=1048 (Compiled frame)

      - org.infinispan.distexec.mapreduce.MapReduceTask.executeMapPhaseWithLocalReduction(java.util.Map) @bci=419, line=663 (Compiled frame)

      - org.infinispan.distexec.mapreduce.MapReduceTask.executeHelper(java.lang.String) @bci=394, line=501 (Interpreted frame)

      - org.infinispan.distexec.mapreduce.MapReduceTask.execute() @bci=2, line=414 (Compiled frame)

      - org.infinispan.distexec.mapreduce.MapReduceTask.execute(org.infinispan.distexec.mapreduce.Collator) @bci=1, line=833 (Compiled frame)

      - org.infinispan.server.hotrod.util.BulkUtil.getAllKeys(org.infinispan.Cache, int) @bci=92, line=43 (Compiled frame)

      - org.infinispan.server.hotrod.AbstractEncoder1x.writeResponse(org.infinispan.server.hotrod.Response, io.netty.buffer.ByteBuf, org.infinispan.manager.EmbeddedCacheManager, org.infinispan.server.hotrod.HotRodServer) @bci=705, line=111 (Compiled frame)

      - org.infinispan.server.hotrod.HotRodEncoder.encode(io.netty.channel.ChannelHandlerContext, java.lang.Object, java.util.List) @bci=221, line=41 (Compiled frame)

      - io.netty.handler.codec.MessageToMessageEncoder.write(io.netty.channel.ChannelHandlerContext, java.lang.Object, io.netty.channel.ChannelPromise) @bci=25, line=89 (Compiled frame)

      - io.netty.channel.AbstractChannelHandlerContext.invokeWrite(java.lang.Object, io.netty.channel.ChannelPromise) @bci=10, line=657 (Compiled frame)

      - io.netty.channel.AbstractChannelHandlerContext.write(java.lang.Object, boolean, io.netty.channel.ChannelPromise) @bci=27, line=715 (Compiled frame)

      - io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(java.lang.Object, io.netty.channel.ChannelPromise) @bci=34, line=705 (Compiled frame)

      - io.netty.channel.AbstractChannelHandlerContext.writeAndFlush(java.lang.Object) @bci=6, line=740 (Compiled frame)

      - io.netty.channel.DefaultChannelPipeline.writeAndFlush(java.lang.Object) @bci=5, line=895 (Compiled frame)

      - io.netty.channel.AbstractChannel.writeAndFlush(java.lang.Object) @bci=5, line=241 (Compiled frame)

      - org.infinispan.server.core.AbstractProtocolDecoder.writeResponse(io.netty.channel.Channel, java.lang.Object) @bci=193, line=220 (Compiled frame)

      - org.infinispan.server.hotrod.HotRodDecoder.customDecodeKey(io.netty.channel.ChannelHandlerContext, io.netty.buffer.ByteBuf) @bci=42, line=156 (Interpreted frame)

      - org.infinispan.server.core.AbstractProtocolDecoder.org$infinispan$server$core$AbstractProtocolDecoder$$decodeKey(io.netty.channel.ChannelHandlerContext, io.netty.buffer.ByteBuf, org.infinispan.server.core.DecoderState) @bci=331, line=159 (Compiled frame)

      - org.infinispan.server.core.AbstractProtocolDecoder.decodeDispatch(io.netty.channel.ChannelHandlerContext, io.netty.buffer.ByteBuf, java.util.List) @bci=90, line=71 (Compiled frame)

      - org.infinispan.server.core.AbstractProtocolDecoder.decode(io.netty.channel.ChannelHandlerContext, io.netty.buffer.ByteBuf, java.util.List) @bci=21, line=61 (Compiled frame)

      - io.netty.handler.codec.ReplayingDecoder.callDecode(io.netty.channel.ChannelHandlerContext, io.netty.buffer.ByteBuf, java.util.List) @bci=53, line=362 (Compiled frame)

      - io.netty.handler.codec.ByteToMessageDecoder.channelRead(io.netty.channel.ChannelHandlerContext, java.lang.Object) @bci=116, line=149 (Compiled frame)

      - org.infinispan.server.core.AbstractProtocolDecoder.channelRead(io.netty.channel.ChannelHandlerContext, java.lang.Object) @bci=17, line=459 (Compiled frame)

      - io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(java.lang.Object) @bci=9, line=332 (Compiled frame)

      - io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(java.lang.Object) @bci=35, line=318 (Compiled frame)

      - io.netty.channel.DefaultChannelPipeline.fireChannelRead(java.lang.Object) @bci=5, line=787 (Compiled frame)

      - io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read() @bci=172, line=125 (Compiled frame)

      - io.netty.channel.nio.NioEventLoop.processSelectedKey(java.nio.channels.SelectionKey, io.netty.channel.nio.AbstractNioChannel) @bci=42, line=507 (Compiled frame)

      - io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(java.nio.channels.SelectionKey[]) @bci=37, line=464 (Compiled frame)

      - io.netty.channel.nio.NioEventLoop.processSelectedKeys() @bci=15, line=378 (Compiled frame)

      - io.netty.channel.nio.NioEventLoop.run() @bci=86, line=350 (Compiled frame)

      - io.netty.util.concurrent.SingleThreadEventExecutor$2.run() @bci=13, line=116 (Interpreted frame)

      - java.lang.Thread.run() @bci=11, line=744 (Interpreted frame)

       

      I have attached a copy of the full thread dump.

       

      Is this a problem with the server, or with my configuration?  Have I overlooked something that would prevent this?