
    wildfly 10 server infinispan cluster not able to transfer the data to standalone JVM application

    shekark

      Issue: We get the exception below in the standalone JVM application when it receives data from the WildFly 10 server's Infinispan cluster.

       

      Exception:

       

      Logs from the standalone JVM application:

       

      main, setSoTimeout(0) called

      main, WRITE: TLSv1.2 Application Data, length = 400

      main, READ: TLSv1.2 Application Data, length = 336

      main, READ: TLSv1.2 Application Data, length = 752

      main, READ: TLSv1.2 Application Data, length = 48

      main, setSoTimeout(0) called

      main, WRITE: TLSv1.2 Application Data, length = 432

      main, READ: TLSv1.2 Application Data, length = 400

      main, READ: TLSv1.2 Application Data, length = 112

      Aug 16, 2017 10:07:18 PM org.jgroups.blocks.RequestCorrelator dispatch

      SEVERE: JGRP000225: failed unmarshalling buffer into return value

      java.io.StreamCorruptedException: Unexpected byte found when reading an object: 9

        at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:754)

        at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)

        at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)

        at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:134)

        at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)

        at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)

        at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)

        at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:419)

        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:357)

        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:245)

        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:664)

        at org.jgroups.JChannel.up(JChannel.java:738)

        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)

        at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)

        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1040)

        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)

        at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1070)

        at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:785)

        at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)

        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:638)

        at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)

        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)

        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:310)

        at org.jgroups.protocols.MERGE3.up(MERGE3.java:285)

        at org.jgroups.protocols.Discovery.up(Discovery.java:296)

        at org.jgroups.protocols.TP.passMessageUp(TP.java:1601)

        at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1817)

        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        at java.lang.Thread.run(Thread.java:748)

       

       

      Aug 16, 2017 10:07:18 PM org.jgroups.blocks.RequestCorrelator dispatch

      SEVERE: JGRP000225: failed unmarshalling buffer into return value

      java.io.StreamCorruptedException: Unexpected byte found when reading an object: 9

        at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:754)

        at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)

        at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)

        at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:134)

        at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)

        at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)

        at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)

        at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:419)

        at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:357)

        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:245)

        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:664)

        at org.jgroups.JChannel.up(JChannel.java:738)

        at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)

        at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)

        at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)

        at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1040)

        at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)

        at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1070)

        at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:785)

        at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)

        at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:638)

        at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)

        at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)

        at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:310)

        at org.jgroups.protocols.MERGE3.up(MERGE3.java:285)

        at org.jgroups.protocols.Discovery.up(Discovery.java:296)

        at org.jgroups.protocols.TP.passMessageUp(TP.java:1601)

        at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1817)

        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        at java.lang.Thread.run(Thread.java:748)

       

       

      Scenario: We started the WildFly 10 server with Infinispan in clustered mode, then started the standalone JVM application, which joined the same cluster. The error above occurs while the cache data is being transmitted. Cache transfer works fine between WildFly and WildFly, and between standalone JVM and standalone JVM.

       

      Platform details:

       

      Java: 1.8.0_131

      OS: CentOS release 6.8 (Final)

      Application server: wildfly-10.1.0.Final

        • 1. Re: wildfly 10 server infinispan cluster not able to transfer the data to standalone JVM application
          pferraro

          WildFly configures Infinispan with a specific marshalling configuration, which is very likely incompatible with the way you've configured Infinispan in your standalone client.

          From what you've described, your standalone application should probably be configured to access the Infinispan cache on WildFly via Infinispan's RemoteCache mechanism.  To do this, you'll need to install the requisite endpoint subsystem from infinispan-server: infinispan/server/integration/endpoint at 8.2.x · infinispan/infinispan · GitHub

          This will allow your standalone client to access the remote cache via the Hot Rod protocol.
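
          For illustration, here is a minimal Hot Rod client sketch; the host, the port (11222 is the Hot Rod default) and the "weather" cache name are assumptions to adjust to your endpoint configuration:

            import org.infinispan.client.hotrod.RemoteCache;
            import org.infinispan.client.hotrod.RemoteCacheManager;
            import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

            public class HotRodClientSketch {
                public static void main(String[] args) {
                    // Point the client at the Hot Rod endpoint exposed by the server.
                    // Host, port and cache name are assumptions for this sketch.
                    ConfigurationBuilder builder = new ConfigurationBuilder();
                    builder.addServer().host("127.0.0.1").port(11222);

                    RemoteCacheManager manager = new RemoteCacheManager(builder.build());
                    try {
                        RemoteCache<String, String> weather = manager.getCache("weather");
                        weather.put("Rome", "sunny");
                        System.out.println(weather.get("Rome"));
                    } finally {
                        manager.stop();
                    }
                }
            }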

          • 2. Re: wildfly 10 server infinispan cluster not able to transfer the data to standalone JVM application
            shekark

            Hi pamorim72

             

            Thank you very much for your comments.

             

            Here is a more detailed description of the scenario.

             

            At present we have 3 standalone application instances and 13 JBoss server instances, and all of them join the same Infinispan cluster.

             

            Below is the stack we are currently using in our production systems:

               

            Application Server: jboss-as-7.1.1.Final
            Java Version: 1.7.0_80
            Infinispan Core: 5.2.1.Final
            JGroups: 3.2.8.Final

               

            We are planning to migrate to WildFly and keep the same structure (3 standalone instances and 13 WildFly instances in Infinispan clustered mode).

             

            New stack:

               

            Application Server: wildfly-10.1.0.Final
            Java Version: 1.8.0_131
            Infinispan Core: 9.1.0.Final
            JGroups: 3.6.10.Final
            OS: CentOS release 6.8 (Final)

             

             

             

            The example code and the standalone-ha.xml for the WildFly server are uploaded at https://github.com/kshkrreddy/infinispan-cluster-tutorial

             

            Steps followed to reproduce the error:

             

            1. Started the standalone application, which created the cluster.

            2. Started the WildFly server and deployed the application; it tries to join the standalone application's Infinispan cluster and fetch the data, and fails with the errors below.

             

            WildFly error:

             

            2017-08-20 18:04:15,433 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-3) ISPN000329: Unable to read rebalancing status from coordinator localhost-35814: org.infinispan.util.concurrent.TimeoutException: Replication timeout for localhost-35814

             

            Standalone app error:

             

            Caused by: java.io.StreamCorruptedException: Unexpected byte found when reading an object: 9

             

            More detailed logs are below:

             

             

            [main][PF2-[2017-08-20 17:53:30,128][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: Loading service impl: Parser72

            [main][PF2-[2017-08-20 17:53:30,132][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: Loading service impl: Parser80

            [main][PF2-[2017-08-20 17:53:30,133][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: Loading service impl: Parser72

            [main][PF2-[2017-08-20 17:53:30,133][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: Loading service impl: Parser80

            [main][PF2-[2017-08-20 17:53:30,360][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: No service impls found: ModuleLifecycle

            [main][PF2-[2017-08-20 17:53:30,364][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: No service impls found: ModuleMetadataFileFinder

            [main][PF2-[2017-08-20 17:53:30,471][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: No service impls found: ModuleCommandExtensions

            [main][PF2-[2017-08-20 17:53:30,471][DEBUG][org.infinispan.util.ModuleProperties][main]: No module command extensions to load

            [main][PF2-[2017-08-20 17:53:30,720][INFO ][org.infinispan.remoting.transport.jgroups.JGroupsTransport][main]: ISPN000078: Starting JGroups channel ee

            [main][PF2-[2017-08-20 17:54:14,550][DEBUG][org.infinispan.remoting.transport.jgroups.JGroupsTransport][main]: New view accepted: [localhost-35814|0] (1) [localhost-35814]

            [main][PF2-[2017-08-20 17:54:14,560][INFO ][org.infinispan.remoting.transport.jgroups.JGroupsTransport][main]: ISPN000094: Received new cluster view for channel ee: [localhost-35814|0] (1) [localhost-35814]

            [main][PF2-[2017-08-20 17:54:14,581][INFO ][org.infinispan.remoting.transport.jgroups.JGroupsTransport][main]: ISPN000079: Channel ee local address is localhost-35814, physical addresses are [127.0.0.1:53282]

            [transport-thread--p4-t1][PF2-[2017-08-20 17:54:14,590][DEBUG][org.infinispan.topology.ClusterTopologyManagerImpl][transport-thread--p4-t1]: Recovering cluster status for view 0

            [main][PF2-[2017-08-20 17:54:14,591][INFO ][org.infinispan.factories.GlobalComponentRegistry][main]: ISPN000128: Infinispan version: Infinispan 'Chakra' 8.2.4.Final

            [main][PF2-[2017-08-20 17:54:14,595][DEBUG][org.infinispan.manager.DefaultCacheManager][main]: Started cache manager ee on null

            [transport-thread--p4-t1][PF2-[2017-08-20 17:54:14,597][DEBUG][org.infinispan.topology.LocalTopologyManagerImpl][transport-thread--p4-t1]: Sending cluster status response for view 0

            [transport-thread--p4-t1][PF2-[2017-08-20 17:54:14,597][DEBUG][org.infinispan.topology.ClusterTopologyManagerImpl][transport-thread--p4-t1]: Got 1 status responses. members are [localhost-35814]

            [main][PF2-[2017-08-20 17:54:14,690][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: No service impls found: TypeConverter

            [main][PF2-[2017-08-20 17:54:14,810][DEBUG][org.infinispan.commons.util.ServiceFinder][main]: No service impls found: FilterIndexingServiceProvider

            [main][PF2-[2017-08-20 17:54:14,820][DEBUG][org.infinispan.interceptors.InterceptorChain][main]: Interceptor chain size: 9

            [main][PF2-[2017-08-20 17:54:14,827][DEBUG][org.infinispan.interceptors.InterceptorChain][main]: Interceptor chain is:

              >> org.infinispan.interceptors.distribution.DistributionBulkInterceptor

              >> org.infinispan.interceptors.InvocationContextInterceptor

              >> org.infinispan.interceptors.compat.TypeConverterInterceptor

              >> org.infinispan.interceptors.CacheMgmtInterceptor

              >> org.infinispan.statetransfer.StateTransferInterceptor

              >> org.infinispan.interceptors.locking.NonTransactionalLockingInterceptor

              >> org.infinispan.interceptors.EntryWrappingInterceptor

              >> org.infinispan.interceptors.distribution.NonTxDistributionInterceptor

              >> org.infinispan.interceptors.CallInterceptor

            [main][PF2-[2017-08-20 17:54:14,832][DEBUG][org.infinispan.jmx.JmxUtil][main]: Object name org.infinispan:type=Cache,name="weather(repl_sync)",manager="DefaultCacheManager",component=Cache already registered

            [main][PF2-[2017-08-20 17:54:14,836][DEBUG][org.infinispan.topology.LocalTopologyManagerImpl][main]: Node localhost-35814 joining cache weather

            [main][PF2-[2017-08-20 17:54:14,847][DEBUG][org.infinispan.topology.ClusterTopologyManagerImpl][main]: Updating cluster-wide stable topology for cache weather, topology = CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[localhost-35814: 256]}, pendingCH=null, unionCH=null, actualMembers=[localhost-35814]}

            [main][PF2-[2017-08-20 17:54:14,849][DEBUG][org.infinispan.topology.ClusterCacheStatus][main]: Queueing rebalance for cache weather with members [localhost-35814]

            [main][PF2-[2017-08-20 17:54:14,855][DEBUG][org.infinispan.topology.LocalTopologyManagerImpl][main]: Updating local topology for cache weather: CacheTopology{id=1, rebalanceId=1, currentCH=ReplicatedConsistentHash{ns = 256, owners = (1)[localhost-35814: 256]}, pendingCH=null, unionCH=null, actualMembers=[localhost-35814]}

            [main][PF2-[2017-08-20 17:54:14,857][DEBUG][org.infinispan.statetransfer.StateConsumerImpl][main]: Adding inbound state transfer for segments [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255] of cache weather

            [main][PF2-[2017-08-20 17:54:14,858][DEBUG][org.infinispan.statetransfer.StateConsumerImpl][main]: Removing no longer owned entries for cache weather

            [main][PF2-[2017-08-20 17:54:14,859][DEBUG][org.infinispan.cache.impl.CacheImpl][main]: Started cache weather on localhost-35814

            [ViewHandler,ee,localhost-35814][PF2-[2017-08-20 18:03:14,432][DEBUG][org.infinispan.remoting.transport.jgroups.JGroupsTransport][ViewHandler,ee,localhost-35814]: New view accepted: [localhost-35814|1] (2) [localhost-35814, shekark]

            [ViewHandler,ee,localhost-35814][PF2-[2017-08-20 18:03:14,433][DEBUG][org.infinispan.remoting.transport.jgroups.JGroupsTransport][ViewHandler,ee,localhost-35814]: Joined: [shekark], Left: []

            [ViewHandler,ee,localhost-35814][PF2-[2017-08-20 18:03:14,433][INFO ][org.infinispan.remoting.transport.jgroups.JGroupsTransport][ViewHandler,ee,localhost-35814]: ISPN000094: Received new cluster view for channel ee: [localhost-35814|1] (2) [localhost-35814, shekark]

            [main][PF2-[2017-08-20 18:03:15,274][ERROR][org.infinispan.interceptors.InvocationContextInterceptor][main]: ISPN000136: Error executing command PutKeyValueCommand, writing keys [Rome]

            org.infinispan.remoting.RemoteException: ISPN000217: Received exception from shekark, see cause for remote stack trace

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:793)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$1(JGroupsTransport.java:642)

              at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)

              at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)

              at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

              at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

              at org.infinispan.remoting.transport.jgroups.RspListFuture.futureDone(RspListFuture.java:31)

              at org.jgroups.blocks.Request.checkCompletion(Request.java:152)

              at org.jgroups.blocks.GroupRequest.receiveResponse(GroupRequest.java:116)

              at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:427)

              at org.jgroups.blocks.RequestCorrelator.receiveMessage(RequestCorrelator.java:357)

              at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:245)

              at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.up(MessageDispatcher.java:664)

              at org.jgroups.JChannel.up(JChannel.java:738)

              at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)

              at org.jgroups.protocols.pbcast.STATE_TRANSFER.up(STATE_TRANSFER.java:146)

              at org.jgroups.protocols.RSVP.up(RSVP.java:201)

              at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)

              at org.jgroups.protocols.FlowControl.up(FlowControl.java:390)

              at org.jgroups.protocols.FlowControl.up(FlowControl.java:374)

              at org.jgroups.protocols.pbcast.GMS.up(GMS.java:1040)

              at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:234)

              at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1070)

              at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:785)

              at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)

              at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:649)

              at org.jgroups.protocols.BARRIER.up(BARRIER.java:152)

              at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:155)

              at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:200)

              at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:310)

              at org.jgroups.protocols.MERGE3.up(MERGE3.java:285)

              at org.jgroups.protocols.Discovery.up(Discovery.java:296)

              at org.jgroups.protocols.TP.passMessageUp(TP.java:1601)

              at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1817)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:748)

            Caused by: java.io.StreamCorruptedException: Unexpected byte found when reading an object: 9

              at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:754)

              at org.jboss.marshalling.river.RiverUnmarshaller.doReadObject(RiverUnmarshaller.java:209)

              at org.jboss.marshalling.AbstractObjectInput.readObject(AbstractObjectInput.java:41)

              at org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller.objectFromObjectStream(AbstractJBossMarshaller.java:134)

              at org.infinispan.marshall.core.VersionAwareMarshaller.objectFromByteBuffer(VersionAwareMarshaller.java:101)

              at org.infinispan.commons.marshall.AbstractDelegatingMarshaller.objectFromByteBuffer(AbstractDelegatingMarshaller.java:80)

              at org.infinispan.remoting.transport.jgroups.MarshallerAdapter.objectFromBuffer(MarshallerAdapter.java:28)

              at org.jgroups.blocks.RequestCorrelator.dispatch(RequestCorrelator.java:419)

              ... 27 more

             

             

            WildFly application server Infinispan logs:

             

            2017-08-20 18:04:09,437 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-8) Timed out waiting for rebalancing status from coordinator, trying again

            2017-08-20 18:04:09,437 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (MSC service thread 1-8) dests=[localhost-35814], command=CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=shekark, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1}, mode=SYNCHRONOUS, timeout=6000

            2017-08-20 18:04:09,437 TRACE [org.infinispan.remoting.transport.jgroups.CommandAwareRpcDispatcher] (MSC service thread 1-8) Replication task sending CacheTopologyControlCommand{cache=null, type=POLICY_GET_STATUS, sender=shekark, joinInfo=null, topologyId=0, rebalanceId=0, currentCH=null, pendingCH=null, availabilityMode=null, actualMembers=null, throwable=null, viewId=-1} to single recipient localhost-35814 with response mode GET_ALL

            2017-08-20 18:04:09,437 TRACE [org.infinispan.commons.marshall.AdaptiveBufferSizePredictor] (MSC service thread 1-8) Next predicted buffer size for object type 'org.infinispan.topology.CacheTopologyControlCommand' will be 384

            2017-08-20 18:04:09,437 TRACE [org.infinispan.marshall.core.VersionAwareMarshaller] (MSC service thread 1-8) Wrote version 510

            2017-08-20 18:04:09,437 TRACE [org.infinispan.commons.marshall.jboss.AbstractJBossMarshaller] (MSC service thread 1-8) Stop marshaller

            2017-08-20 18:04:09,437 TRACE [org.jgroups.blocks.RequestCorrelator] (MSC service thread 1-8) shekark: invoking unicast RPC [req-id=40] on localhost-35814

            2017-08-20 18:04:09,437 TRACE [org.jgroups.protocols.UNICAST3] (MSC service thread 1-8) shekark --> DATA(localhost-35814: #43, conn_id=0)

            2017-08-20 18:04:09,437 TRACE [org.jgroups.protocols.UDP] (MSC service thread 1-8) shekark: sending msg to localhost-35814, src=shekark, headers are RequestCorrelator: corr_id=200, type=REQ, req_id=40, rsp_expected=true, FORK: ee:ejb, UNICAST3: DATA, seqno=43, TP: [cluster_name=ee]

            2017-08-20 18:04:09,437 TRACE [org.jgroups.protocols.UFC] (MSC service thread 1-8) shekark used 11 credits, 1999560 remaining

            2017-08-20 18:04:09,437 TRACE [org.jgroups.protocols.UDP] (TransferQueueBundler,ee,shekark) shekark: sending 1 msgs (96 bytes (480.00% of max_bundle_size) to 1 dests(s): [ee:localhost-35814]

            2017-08-20 18:04:09,438 TRACE [org.jgroups.protocols.UDP] (thread-19,ee,shekark) shekark: received [dst: shekark, src: localhost-35814 (3 headers), size=7 bytes, flags=OOB|DONT_BUNDLE|NO_TOTAL_ORDER], headers are RequestCorrelator: corr_id=200, type=RSP, req_id=40, rsp_expected=true, UNICAST3: DATA, seqno=42, TP: [cluster_name=ee]

            2017-08-20 18:04:09,438 TRACE [org.jgroups.protocols.UNICAST3] (thread-19,ee,shekark) shekark <-- DATA(localhost-35814: #42, conn_id=0)

            2017-08-20 18:04:09,438 TRACE [org.jgroups.protocols.UNICAST3] (thread-19,ee,shekark) shekark: delivering localhost-35814#42

            2017-08-20 18:04:09,438 TRACE [org.jgroups.protocols.UFC] (thread-19,ee,shekark) localhost-35814 used 7 credits, 1999709 remaining

            2017-08-20 18:04:09,606 TRACE [org.jgroups.protocols.UNICAST3] (thread-3,ee,shekark) shekark --> ACK(localhost-35814: #42)

            2017-08-20 18:04:09,606 TRACE [org.jgroups.protocols.UDP] (thread-3,ee,shekark) shekark: sending msg to localhost-35814, src=shekark, headers are UNICAST3: ACK, seqno=42, ts=11, TP: [cluster_name=ee]

            2017-08-20 18:04:09,606 TRACE [org.jgroups.protocols.UDP] (TransferQueueBundler,ee,shekark) shekark: sending 1 msgs (58 bytes (290.00% of max_bundle_size) to 1 dests(s): [ee:localhost-35814]

            2017-08-20 18:04:09,754 TRACE [org.jgroups.protocols.UDP] (thread-2,ee,shekark) shekark: received [dst: shekark, src: localhost-35814 (2 headers), size=0 bytes, flags=INTERNAL], headers are UNICAST3: ACK, seqno=43, ts=12, TP: [cluster_name=ee]

            2017-08-20 18:04:09,754 TRACE [org.jgroups.protocols.UNICAST3] (thread-2,ee,shekark) shekark <-- ACK(localhost-35814: #43, conn-id=0, ts=12)

            2017-08-20 18:04:10,433 TRACE [org.jgroups.protocols.UDP] (thread-1,ee,shekark) shekark: received [dst: <null>, src: localhost-35814 (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, TP: [cluster_name=ee]

            2017-08-20 18:04:14,453 TRACE [org.jgroups.protocols.UDP] (thread-2,ee,shekark) shekark: sending msg to null, src=shekark, headers are FD_ALL: heartbeat, TP: [cluster_name=ee]

            2017-08-20 18:04:14,453 TRACE [org.jgroups.protocols.UDP] (thread-2,ee,shekark) shekark: looping back message [dst: <null>, src: shekark (2 headers), size=0 bytes, flags=INTERNAL]

            2017-08-20 18:04:14,453 TRACE [org.jgroups.protocols.UDP] (thread-2,ee,shekark) shekark: received [dst: <null>, src: shekark (2 headers), size=0 bytes, flags=INTERNAL], headers are FD_ALL: heartbeat, TP: [cluster_name=ee]

            2017-08-20 18:04:14,453 TRACE [org.jgroups.protocols.UDP] (TransferQueueBundler,ee,shekark) shekark: sending 1 msgs (34 bytes (170.00% of max_bundle_size) to 1 dests(s): [ee]

            2017-08-20 18:04:14,568 TRACE [org.jgroups.protocols.pbcast.STABLE] (thread-2,ee,shekark) shekark: sending stable msg to localhost-35814: localhost-35814: [2], shekark: [0]

            2017-08-20 18:04:14,568 TRACE [org.jgroups.protocols.UDP] (thread-2,ee,shekark) shekark: sending msg to localhost-35814, src=shekark, headers are STABLE: [STABLE_GOSSIP] view-id= [localhost-35814|1], TP: [cluster_name=ee]

            2017-08-20 18:04:14,569 TRACE [org.jgroups.protocols.UDP] (TransferQueueBundler,ee,shekark) shekark: sending 1 msgs (117 bytes (585.00% of max_bundle_size) to 1 dests(s): [ee:localhost-35814]

            2017-08-20 18:04:15,346 TRACE [org.jgroups.protocols.pbcast.STABLE] (thread-3,ee,shekark) shekark: sending stable msg to localhost-35814: localhost-35814: [2], shekark: [0]

            2017-08-20 18:04:15,346 TRACE [org.jgroups.protocols.UDP] (thread-3,ee,shekark) shekark: sending msg to localhost-35814, src=shekark, headers are STABLE: [STABLE_GOSSIP] view-id= [localhost-35814|1], TP: [cluster_name=ee]

            2017-08-20 18:04:15,346 TRACE [org.jgroups.protocols.UDP] (TransferQueueBundler,ee,shekark) shekark: sending 1 msgs (117 bytes (585.00% of max_bundle_size) to 1 dests(s): [ee:localhost-35814]

            2017-08-20 18:04:15,432 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (timeout-thread--p8-t1) Response: sender=localhost-35814, received=false, suspected=false

            2017-08-20 18:04:15,432 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (timeout-thread--p7-t1) Response: sender=localhost-35814, received=false, suspected=false

            2017-08-20 18:04:15,433 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-3) ISPN000329: Unable to read rebalancing status from coordinator localhost-35814: org.infinispan.util.concurrent.TimeoutException: Replication timeout for localhost-35814

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$0(JGroupsTransport.java:629)

              at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)

              at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)

              at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

              at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:46)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:17)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:748)

             

             

            2017-08-20 18:04:15,433 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-4) ISPN000329: Unable to read rebalancing status from coordinator localhost-35814: org.infinispan.util.concurrent.TimeoutException: Replication timeout for localhost-35814

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$0(JGroupsTransport.java:629)

              at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)

              at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)

              at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

              at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:46)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:17)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:748)

             

             

            2017-08-20 18:04:15,433 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-3) Timed out waiting for rebalancing status from coordinator, trying again

            2017-08-20 18:04:15,433 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-4) Timed out waiting for rebalancing status from coordinator, trying again

            2017-08-20 18:04:15,434 TRACE [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-4) Invoking start method org.infinispan.factories.components.ComponentMetadata$PrioritizedMethodMetadata@4f717d9b on component org.infinispan.topology.LocalTopologyManager

            2017-08-20 18:04:15,434 TRACE [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-3) Invoking start method org.infinispan.factories.components.ComponentMetadata$PrioritizedMethodMetadata@8fb72e on component org.infinispan.topology.LocalTopologyManager

            2017-08-20 18:04:15,434 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (MSC service thread 1-3) Starting LocalTopologyManager on shekark

            2017-08-20 18:04:15,434 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (MSC service thread 1-4) Starting LocalTopologyManager on shekark

            2017-08-20 18:04:15,434 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (timeout-thread--p10-t1) Response: sender=localhost-35814, received=false, suspected=false

            2017-08-20 18:04:15,434 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-5) ISPN000329: Unable to read rebalancing status from coordinator localhost-35814: org.infinispan.util.concurrent.TimeoutException: Replication timeout for localhost-35814

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$0(JGroupsTransport.java:629)

              at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)

              at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)

              at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

              at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:46)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:17)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:748)

             

             

            2017-08-20 18:04:15,434 TRACE [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-4) Not registering a shutdown hook.  Configured behavior = DONT_REGISTER

            2017-08-20 18:04:15,434 TRACE [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-3) Not registering a shutdown hook.  Configured behavior = DONT_REGISTER

            2017-08-20 18:04:15,434 INFO  [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-4) ISPN000128: Infinispan version: Infinispan 'Chakra' 8.2.4.Final

            2017-08-20 18:04:15,434 DEBUG [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-5) Timed out waiting for rebalancing status from coordinator, trying again

            2017-08-20 18:04:15,434 TRACE [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-5) Invoking start method org.infinispan.factories.components.ComponentMetadata$PrioritizedMethodMetadata@622d9a46 on component org.infinispan.topology.LocalTopologyManager

            2017-08-20 18:04:15,435 TRACE [org.infinispan.topology.LocalTopologyManagerImpl] (MSC service thread 1-5) Starting LocalTopologyManager on shekark

            2017-08-20 18:04:15,435 TRACE [org.infinispan.factories.GlobalComponentRegistry] (MSC service thread 1-5) Not registering a shutdown hook.  Configured behavior = DONT_REGISTER

            2017-08-20 18:04:15,437 TRACE [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (timeout-thread--p12-t1) Response: sender=localhost-35814, received=false, suspected=false

            2017-08-20 18:04:15,438 DEBUG [org.infinispan.manager.DefaultCacheManager] (MSC service thread 1-3) Started cache manager server on null

            2017-08-20 18:04:15,441 DEBUG [org.infinispan.manager.DefaultCacheManager] (MSC service thread 1-5) Started cache manager web on null

            2017-08-20 18:04:15,450 DEBUG [org.infinispan.manager.DefaultCacheManager] (MSC service thread 1-4) Started cache manager hibernate on null

            2017-08-20 18:04:15,457 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 74) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

            2017-08-20 18:04:15,458 INFO  [org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 74) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.

            2017-08-20 18:04:15,448 WARN  [org.infinispan.topology.ClusterTopologyManagerImpl] (MSC service thread 1-8) ISPN000329: Unable to read rebalancing status from coordinator localhost-35814: org.infinispan.util.concurrent.TimeoutException: Replication timeout for localhost-35814

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.checkRsp(JGroupsTransport.java:801)

              at org.infinispan.remoting.transport.jgroups.JGroupsTransport.lambda$invokeRemotelyAsync$0(JGroupsTransport.java:629)

              at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)

              at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)

              at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)

              at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:46)

              at org.infinispan.remoting.transport.jgroups.SingleResponseFuture.call(SingleResponseFuture.java:17)

              at java.util.concurrent.FutureTask.run(FutureTask.java:266)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)

              at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)

              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

              at java.lang.Thread.run(Thread.java:748)

            • 3. Re: wildfly 10 server infinispan cluster not able to transfer the data to standalone JVM application
              shekark

              Hi pferraro

               

              We are not trying to start the standalone app as a client of WildFly. When the standalone app starts first, it checks whether a cluster already exists; if not, it loads the data from the DB into Infinispan, otherwise it joins the existing cluster and gets the data.

              If we then start the WildFly server as a second instance with the same cluster name, it should join the existing standalone cluster and get the data from it.
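
              For illustration, here is a minimal sketch of that bootstrap logic in embedded (library) mode; the cluster name, the JGroups file name, the "weather" cache and the DB-loading helper are only assumptions for the example, not our exact code:

                import org.infinispan.Cache;
                import org.infinispan.configuration.cache.CacheMode;
                import org.infinispan.configuration.cache.ConfigurationBuilder;
                import org.infinispan.configuration.global.GlobalConfigurationBuilder;
                import org.infinispan.manager.DefaultCacheManager;

                public class StandaloneCacheBootstrap {

                    public static void main(String[] args) {
                        // Library-mode (embedded) cache manager. The cluster name and the
                        // JGroups configuration file are assumptions; they must match what
                        // the other cluster members use.
                        GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
                        global.transport().clusterName("ee")
                              .addProperty("configurationFile", "jgroups-udp.xml");

                        ConfigurationBuilder replSync = new ConfigurationBuilder();
                        replSync.clustering().cacheMode(CacheMode.REPL_SYNC);

                        DefaultCacheManager manager = new DefaultCacheManager(global.build());
                        manager.defineConfiguration("weather", replSync.build());

                        // Starting the cache triggers state transfer if other members already hold data.
                        Cache<String, String> weather = manager.getCache("weather");

                        // If we are the only member, no cluster existed yet: warm the cache from the DB.
                        if (manager.getMembers().size() == 1) {
                            loadFromDatabase(weather);
                        }
                    }

                    // Placeholder for the real database loading code.
                    private static void loadFromDatabase(Cache<String, String> weather) {
                        weather.put("Rome", "sunny");
                    }
                }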

              • 4. Re: wildfly 10 server infinispan cluster not able to transfer the data to standalone JVM application
                pferraro

                I'm not sure what was unclear from my first response.  You *cannot* simply start an Infinispan cache and join the same cluster as a WildFly instance without configuring it in a compatible way.  I still don't understand your use case, specifically why a standalone client would ever try to join a server-managed Infinispan replicated/distributed cache.  Where does this cached data originate?  What cache mode are you using?

                • 5. Re: wildfly 10 server infinispan cluster not able to transfer the data to standalone JVM application
                  shekark

                  Thank you very much for the response, pferraro.

                   

                   

                   

                  We decided to go with remote caching. As a temporary fix, we are running Infinispan on WildFly in library mode instead of through the WildFly subsystem, and it works fine because both the WildFly deployment and the standalone application are then in library mode.
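
                  Roughly, the library-mode bootstrap inside the WildFly deployment looks like the sketch below; the @Singleton packaging and the shared "infinispan-cluster.xml" file name are assumptions for illustration:

                    import java.io.IOException;

                    import javax.annotation.PostConstruct;
                    import javax.annotation.PreDestroy;
                    import javax.ejb.Singleton;
                    import javax.ejb.Startup;

                    import org.infinispan.Cache;
                    import org.infinispan.manager.DefaultCacheManager;

                    @Singleton
                    @Startup
                    public class LibraryModeCacheProducer {

                        private DefaultCacheManager manager;

                        @PostConstruct
                        void start() {
                            try {
                                // The deployment bundles its own Infinispan jars (same version as the
                                // standalone app) instead of using the WildFly Infinispan subsystem.
                                // "infinispan-cluster.xml" is a hypothetical classpath resource shared
                                // with the standalone application.
                                manager = new DefaultCacheManager("infinispan-cluster.xml");
                                // Eagerly start the replicated cache so state transfer happens at deploy time.
                                manager.getCache("weather");
                            } catch (IOException e) {
                                throw new IllegalStateException("Could not start embedded cache manager", e);
                            }
                        }

                        public Cache<String, String> weatherCache() {
                            return manager.getCache("weather");
                        }

                        @PreDestroy
                        void stop() {
                            if (manager != null) {
                                manager.stop();
                            }
                        }
                    }

                  Depending on how the deployment is packaged, a jboss-deployment-structure.xml that excludes the server's org.infinispan modules may also be needed so the bundled Infinispan jars are the ones actually loaded.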

                   

                  Do you see any issues with this approach?