12 Replies Latest reply: Sep 19, 2013 10:49 AM by Dan Berindei

    Infinispan cluster with Invalidation

    Andre Locher Newbie

      I'm currently trying to run an Infinispan cluster (5.1.4.FINAL) with two machines using invalidation. Right now I'm trying to run two processes on the same machine that should send invalidation messages to each other, but I can't get them to see each other and properly exchange messages.

       

      As soon as the second process starts up, they both start sending messages, but each of them drops messages with the error that the sender is not in the table. I tried McastSenderTest/McastReceiverTest, which seem to work. While a McastReceiverTest is running, I seem to get the updates from the Infinispan nodes as well, although the output looks a bit strange. Does anyone know what I'm doing wrong here?

       

      Node 1:

      10:07:18,148 INFO  [JGroupsTransport] ISPN000078: Starting JGroups Channel

      10:07:18,805 WARN  [UDP] send buffer of socket java.net.DatagramSocket@359ba2ce was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)

      10:07:18,805 WARN  [UDP] receive buffer of socket java.net.DatagramSocket@359ba2ce was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)

      10:07:18,805 WARN  [UDP] send buffer of socket java.net.MulticastSocket@7cdd9de0 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)

      10:07:18,805 WARN  [UDP] receive buffer of socket java.net.MulticastSocket@7cdd9de0 was set to 25MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)

      10:07:21,842 INFO  [JGroupsTransport] ISPN000094: Received new cluster view: [pbapp6-59135|0] [pbapp6-59135]

      10:07:21,843 INFO  [JGroupsTransport] ISPN000079: Cache local address is pbapp6-59135, physical addresses are [fe80:0:0:0:216:3eff:fe5a:3f76%3:32830]

      10:07:21,853 INFO  [GlobalComponentRegistry] ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.4.FINAL

      10:18:38,465 WARN  [NAKACK] pbapp6-59135: dropped message 1 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]

      10:18:38,847 WARN  [NAKACK] pbapp6-59135: dropped message 2 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]

      10:18:39,475 WARN  [NAKACK] pbapp6-59135: dropped message 3 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]

      10:18:39,648 WARN  [NAKACK] pbapp6-59135: dropped message 4 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]

      10:18:41,567 WARN  [NAKACK] pbapp6-59135: dropped message 5 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]

      ....

       

       

      Node 2:

      10:18:27,831 INFO  [JGroupsTransport] ISPN000078: Starting JGroups Channel

      10:18:28,461 WARN  [UDP] send buffer of socket java.net.DatagramSocket@7cdd9de0 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)

      10:18:28,461 WARN  [UDP] receive buffer of socket java.net.DatagramSocket@7cdd9de0 was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)

      10:18:28,462 WARN  [UDP] send buffer of socket java.net.MulticastSocket@4c130f9f was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)

      10:18:28,462 WARN  [UDP] receive buffer of socket java.net.MulticastSocket@4c130f9f was set to 25MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)

      10:18:31,490 INFO  [JGroupsTransport] ISPN000094: Received new cluster view: [pbapp6-49654|0] [pbapp6-49654]

      10:18:31,492 INFO  [JGroupsTransport] ISPN000079: Cache local address is pbapp6-49654, physical addresses are [fe80:0:0:0:216:3eff:fe5a:3f76%3:32832]

      10:18:31,501 INFO  [GlobalComponentRegistry] ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.4.FINAL

      10:18:36,984 WARN  [NAKACK] pbapp6-49654: dropped message 245 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]

      10:18:37,258 WARN  [NAKACK] pbapp6-49654: dropped message 246 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]

      10:18:41,934 WARN  [UDP] pbapp6-49654: no physical address for 724282bc-9cac-33a8-511a-d94c444222f4, dropping message

      10:18:44,906 WARN  [UDP] pbapp6-49654: no physical address for 724282bc-9cac-33a8-511a-d94c444222f4, dropping message

      10:18:46,491 WARN  [NAKACK] pbapp6-49654: dropped message 247 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]

      10:18:46,827 WARN  [NAKACK] pbapp6-49654: dropped message 248 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]

      10:18:49,431 WARN  [NAKACK] pbapp6-49654: dropped message 249 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]

      10:18:49,903 WARN  [NAKACK] pbapp6-49654: dropped message 250 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]

      ....

       

       

      McastSenderTest:

      -bash-3.1$ java -cp jgroups-3.0.9.Final.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.6.7.8 -port 46655

      Socket #1=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/fe80:0:0:0:216:3eff:fe5a:3f76%3

      Socket #2=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/192.168.6.102

      Socket #3=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/0:0:0:0:0:0:0:1%1

      Socket #4=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/127.0.0.1

      > << Received packet from fe80:0:0:0:216:3eff:fe5a:3f76%3:46655: [binary JGroups payload ending in "ISPN"]

      (several more packets arrive from the same address, each carrying a binary payload ending in "ISPN"; towards the end the receiver's output interleaves with newly arriving packets)

        • 1. Re: Infinispan cluster with Invalidation
          Dmitry Udalov Newbie

          That sounds like a JGroups configuration problem. The nodes didn't find each other. Node 1 knows only about itself:

                 Received new cluster view: [pbapp6-59135|0] [pbapp6-59135]

          And likewise Node 2:

                 Received new cluster view: [pbapp6-49654|0] [pbapp6-49654]

           

          It's supposed to be like:  [pbapp6-59135|0] [pbapp6-59135, pbapp6-49654]

           

          Could it be related to the misconfigured UDP send buffer? I'm not sure.
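          For what it's worth, those buffer warnings themselves name the knob: the OS caps on socket buffer sizes. On Linux they can be raised via sysctl; a sketch, with values sized to what JGroups asks for in the logs above (640KB send, 25MB receive):

          ```
          # /etc/sysctl.conf (apply with `sysctl -p`, or use `sysctl -w` for a one-off change)
          net.core.wmem_max = 640000      # allow the 640KB send buffer JGroups requests
          net.core.rmem_max = 26214400    # allow the 25MB receive buffer (25 * 1024 * 1024)
          ```

          That only addresses the performance warnings, though, not necessarily the discovery problem.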

          • 2. Re: Infinispan cluster with Invalidation
            Dmitry Udalov Newbie

            I actually have another question about invalidation mode. Do I get it right that after a cached object is updated on one of the cluster nodes, the other nodes should eventually retrieve the updated data from the store? Can the object instead be replicated from the node where it was modified? I'm trying to avoid expensive DB queries for complex objects and reuse the updated cache object on the other nodes when it's requested. Something similar to replicated mode, but only "on demand". Can that be achieved?

             

            And one more: are the objects on the other nodes invalidated automatically, or should I use a @CacheEntryInvalidated listener to do it from the client?

            Any links to best practices for an invalidation cache?

             

            Thanks,

            Dmitry

            • 3. Re: Infinispan cluster with Invalidation
              Andre Locher Newbie

              I'm using the out-of-the-box jgroups-udp.xml configuration file. I'm not sure what you mean by the send buffer being wrong. Does anyone else have an idea why the nodes are not able to find each other, yet are able to send messages to each other (which then get dropped)?

              • 4. Re: Infinispan cluster with Invalidation
                Galder Zamarreño Master

                Try jgroups-tcp.xml
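                 In case it's unclear how to point Infinispan at a different stack file: the global transport section takes a configurationFile property. A sketch against the 5.1-era XML schema (the cluster name is illustrative):

                 ```xml
                 <infinispan>
                    <global>
                       <transport clusterName="myCluster">
                          <properties>
                             <property name="configurationFile" value="jgroups-tcp.xml"/>
                          </properties>
                       </transport>
                    </global>
                 </infinispan>
                 ```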

                • 5. Re: Infinispan cluster with Invalidation
                  Martin Gencur Novice

                   I was getting a similar error lately, and adding the following snippet to the JGroups stack helped:

                  <protocol type="PING">
                      <property name="timeout">5000</property>
                      <property name="num_initial_members">5</property>
                  </protocol>
                  
                  • 6. Re: Infinispan cluster with Invalidation
                    Martin Gencur Novice

                    Btw, you can achieve lazy loading in replicated mode by disabling state transfer and using the cluster cache loader. Invalidation mode does need a cache store, indeed.
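                    A rough sketch of what that could look like in a 5.x cache definition (note: the state-transfer element is called stateRetrieval in some 5.x versions and stateTransfer in others, so check your schema):

                    ```xml
                    <namedCache name="lazyRepl">
                       <clustering mode="replication">
                          <!-- don't push state to joiners; entries are fetched on demand instead -->
                          <stateTransfer fetchInMemoryState="false"/>
                       </clustering>
                       <loaders>
                          <!-- on a local miss, ask the other cluster members for the entry -->
                          <loader class="org.infinispan.loaders.cluster.ClusterCacheLoader"/>
                       </loaders>
                    </namedCache>
                    ```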

                    • 7. Re: Infinispan cluster with Invalidation
                      Dmitry Udalov Newbie

                      Hi Martin,

                       

                      Where do you suggest adding that PING element? Not to the JGroups configuration file, is it? The JGroups XML file already has a different element:

                      <PING timeout="5000" num_initial_members="5"/>

                      • 8. Re: Infinispan cluster with Invalidation
                        Martin Gencur Novice

                        Ah, I meant this one, sorry :) What I actually used was a config for JBoss AS 7. Do you really use PING with a timeout of 5000? I'm looking at the config file at https://github.com/infinispan/infinispan/blob/9ba7cf53c164b07c6111357f2b21913ea0dff982/core/src/main/resources/jgroups-udp.xml#L62 and it has only 3000. So you've already changed this?

                        • 9. Re: Infinispan cluster with Invalidation
                          Dan Berindei Apprentice

                          @Andre, I have a hunch you'd solve your problem if you forced IPv4 with -Djava.net.preferIPv4Stack=true.

                           

                          If it still doesn't work, please attach your JGroups configuration.
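                          For anyone trying this: the flag is a JVM system property, so it goes on the java command line, or into CATALINA_OPTS/JAVA_OPTS for Tomcat. A sketch (the jar and class names are placeholders):

                          ```
                          # standalone JVM
                          java -Djava.net.preferIPv4Stack=true -cp app.jar com.example.Main

                          # Tomcat: e.g. in bin/setenv.sh
                          export CATALINA_OPTS="$CATALINA_OPTS -Djava.net.preferIPv4Stack=true"
                          ```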

                          • 10. Re: Infinispan cluster with Invalidation
                            Andre Locher Newbie

                            Thanks Dan, that seems to have fixed that issue.

                            • 11. Re: Infinispan cluster with Invalidation
                              Viren Balaut Newbie

                              I am also facing the same issue mentioned in this post.

                               

                              I have an application which internally creates a REPL cache using the EmbeddedCacheManager. I am using jgroups-udp with the default settings.

                              When I run it on Windows, it works fine and the cluster is able to form.

                               

                              But when I run the same code on a Linux machine, the cluster does not form.

                              I have tried the -Djava.net.preferIPv4Stack=true setting, but with no luck so far. I have verified that there is no firewall blocking the traffic.


                              Can you suggest some way to find out the exact issue and resolve it?

                               

                              tomcat 1 log
                              ----------------------------------------------------------------------
                              2013-09-12 07:51:24,480|main|INFO |com.enlighta.cluster.cache.AbstractCacheManager.<init>(AbstractCacheManager.java:61) - Trying to create infinispan cache programatically.
                              2013-09-12 07:51:24,885|main|DEBUG|org.infinispan.util.ModuleProperties.resolveModuleLifecycles(ModuleProperties.java:50) - Loading lifecycle SPI class: org.infinispan.query.impl.LifecycleManager@3352d4
                              2013-09-12 07:51:24,994|main|DEBUG|org.infinispan.util.ModuleProperties.loadModuleCommandHandlers(ModuleProperties.java:79) - Loading module command extension SPI class: org.infinispan.query.QueryModuleCommandExtensions@6e8deccd
                              2013-09-12 07:51:25,026|main|DEBUG|org.infinispan.marshall.core.JBossMarshaller.inject(JBossMarshaller.java:50) - Using JBoss Marshalling
                              2013-09-12 07:51:25,150|main|DEBUG|org.infinispan.manager.DefaultCacheManager.start(DefaultCacheManager.java:688) - Started cache manager ISPN on null
                              2013-09-12 07:51:25,166|main|INFO |com.enlighta.cluster.cache.ClusterCacheService.<init>(ClusterCacheService.java:38) - Trying to create MAP of caches so it can be used by components..
                              2013-09-12 07:51:25,369|main|INFO |org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:182) - ISPN000078: Starting JGroups Channel
                              2013-09-12 07:51:25,852|main|WARN |org.jgroups.stack.Configurator.resolveAndAssignField(Configurator.java:1134) - [JGRP00014] TP.discard_incompatible_packets has been deprecated: incompatible packets are discarded anyway
                              2013-09-12 07:51:25,852|main|WARN |org.jgroups.stack.Configurator.resolveAndAssignField(Configurator.java:1134) - [JGRP00014] TP.enable_bundling has been deprecated: will be ignored as bundling is on by default
                              2013-09-12 07:51:25,899|main|WARN |org.jgroups.stack.Configurator.resolveAndInvokePropertyMethod(Configurator.java:1087) - [JGRP00014] UNICAST.setTimeout has been deprecated: not used anymore
                              2013-09-12 07:51:26,086|main|DEBUG|org.jgroups.stack.Configurator.setDefaultValues(Configurator.java:823) - set property UDP.bind_addr to default value /fe80:0:0:0:d572:3236:54a6:1ff0%13
                              2013-09-12 07:51:26,086|main|DEBUG|org.jgroups.stack.Configurator.setDefaultValues(Configurator.java:823) - set property UDP.diagnostics_addr to default value /ff0e:0:0:0:0:0:75:75
                              2013-09-12 07:51:26,227|main|DEBUG|org.jgroups.protocols.FRAG2.up(FRAG2.java:173) - received CONFIG event: {bind_addr=/fe80:0:0:0:d572:3236:54a6:1ff0%13}
                              2013-09-12 07:51:26,258|main|DEBUG|org.jgroups.protocols.UDP.createSockets(UDP.java:323) - sockets will use interface fe80:0:0:0:d572:3236:54a6:1ff0%13
                              2013-09-12 07:51:26,508|main|DEBUG|org.jgroups.protocols.UDP.createSockets(UDP.java:393) - socket information:
                              mcast_addr=ff0e:0:0:0:0:0:e406:708:46655, bind_addr=/fe80:0:0:0:d572:3236:54a6:1ff0%13, ttl=2
                              sock: bound to fe80:0:0:0:d572:3236:54a6:1ff0%13:62204, receive buffer size=20000000, send buffer size=640000
                              mcast_sock: bound to fe80:0:0:0:d572:3236:54a6:1ff0%13:46655, send buffer size=640000, receive buffer size=25000000
                              2013-09-12 07:51:26,523|main|DEBUG|org.jgroups.protocols.pbcast.GMS.down(GMS.java:924) - address=taru_laptop-45266, cluster=ISPN, physical address=fe80:0:0:0:d572:3236:54a6:1ff0%13:62204
                              2013-09-12 07:51:56,584|main|DEBUG|org.jgroups.protocols.pbcast.NAKACK.setDigest(NAKACK.java:1216) -
                              [taru_laptop-45266 setDigest()]
                              existing digest:  []
                              new digest:       taru_laptop-45266: [0 (0)]
                              resulting digest: taru_laptop-45266: [0 (0)]
                              2013-09-12 07:51:56,584|main|DEBUG|org.jgroups.protocols.pbcast.GMS.installView(GMS.java:595) - taru_laptop-45266: installing [taru_laptop-45266|0] [taru_laptop-45266]
                              2013-09-12 07:51:56,584|main|DEBUG|org.jgroups.protocols.pbcast.STABLE.resume(STABLE.java:206) - resuming message garbage collection
                              2013-09-12 07:51:56,584|main|DEBUG|org.jgroups.protocols.FD_SOCK.down(FD_SOCK.java:328) - VIEW_CHANGE received: [taru_laptop-45266]
                              2013-09-12 07:51:56,584|main|DEBUG|org.infinispan.remoting.transport.jgroups.JGroupsTransport.viewAccepted(JGroupsTransport.java:617) - New view accepted: [taru_laptop-45266|0] [taru_laptop-45266]
                              2013-09-12 07:51:56,584|main|INFO |org.infinispan.remoting.transport.jgroups.JGroupsTransport.viewAccepted(JGroupsTransport.java:644) - ISPN000094: Received new cluster view: [taru_laptop-45266|0] [taru_laptop-45266]
                              2013-09-12 07:51:56,600|main|INFO |com.enlighta.cluster.JMSClientListener.observeViewChanged(JMSClientListener.java:34) - View Changed : EventImpl{type=VIEW_CHANGED, newMembers=[taru_laptop-45266], oldMembers=null, localAddress=taru_laptop-45266, viewId=0, subgroupsMerged=null, mergeView=false}
                              2013-09-12 07:51:56,600|main|DEBUG|org.jgroups.protocols.pbcast.STABLE.resume(STABLE.java:206) - resuming message garbage collection
                              2013-09-12 07:51:56,600|main|DEBUG|org.jgroups.protocols.pbcast.ClientGmsImpl.becomeSingletonMember(ClientGmsImpl.java:336) - created group (first member). My view is [taru_laptop-45266|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl

                              *** After Tomcat 2 comes up, here is the view change message:

                               

                              2013-09-12 07:52:34,488|ViewHandler,taru_laptop-45266|DEBUG|org.jgroups.protocols.pbcast.STABLE.suspend(STABLE.java:190) - suspending message garbage collection
                              2013-09-12 07:52:34,488|ViewHandler,taru_laptop-45266|DEBUG|org.jgroups.protocols.pbcast.STABLE.startResumeTask(STABLE.java:519) - resume task started, max_suspend_time=33000
                              2013-09-12 07:52:34,488|Incoming-1,taru_laptop-45266|DEBUG|org.jgroups.protocols.pbcast.NAKACK.setDigest(NAKACK.java:1216) -
                              [taru_laptop-45266 setDigest()]
                              existing digest:  taru_laptop-45266: [1 (1)]
                              new digest:       taru_laptop-45266: [0 (0)], taru_laptop-12830: [0 (0)]
                              resulting digest: taru_laptop-12830: [0 (0)], taru_laptop-45266: [1 (1)]
                              2013-09-12 07:52:34,488|Incoming-1,taru_laptop-45266|DEBUG|org.jgroups.protocols.pbcast.GMS.installView(GMS.java:595) - taru_laptop-45266: installing [taru_laptop-45266|1] [taru_laptop-45266, taru_laptop-12830]
                              2013-09-12 07:52:34,503|Incoming-1,taru_laptop-45266|DEBUG|org.jgroups.protocols.FD_SOCK.down(FD_SOCK.java:328) - VIEW_CHANGE received: [taru_laptop-45266, taru_laptop-12830]
                              2013-09-12 07:52:34,503|FD_SOCK pinger,taru_laptop-45266|DEBUG|org.jgroups.protocols.FD_SOCK.run(FD_SOCK.java:386) - ping_dest is taru_laptop-12830, pingable_mbrs=[taru_laptop-45266, taru_laptop-12830]

                               

                               

                              tomcat 2 log
                              ------------------------------------------------------------
                              2013-09-12 07:52:33,162|main|INFO |org.infinispan.remoting.transport.jgroups.JGroupsTransport.start(JGroupsTransport.java:182) - ISPN000078: Starting JGroups Channel
                              2013-09-12 07:52:33,614|main|WARN |org.jgroups.stack.Configurator.resolveAndAssignField(Configurator.java:1134) - [JGRP00014] TP.discard_incompatible_packets has been deprecated: incompatible packets are discarded anyway
                              2013-09-12 07:52:33,614|main|WARN |org.jgroups.stack.Configurator.resolveAndAssignField(Configurator.java:1134) - [JGRP00014] TP.enable_bundling has been deprecated: will be ignored as bundling is on by default
                              2013-09-12 07:52:33,661|main|WARN |org.jgroups.stack.Configurator.resolveAndInvokePropertyMethod(Configurator.java:1087) - [JGRP00014] UNICAST.setTimeout has been deprecated: not used anymore
                              2013-09-12 07:52:33,864|main|DEBUG|org.jgroups.stack.Configurator.setDefaultValues(Configurator.java:823) - set property UDP.bind_addr to default value /fe80:0:0:0:d572:3236:54a6:1ff0%13
                              2013-09-12 07:52:33,864|main|DEBUG|org.jgroups.stack.Configurator.setDefaultValues(Configurator.java:823) - set property UDP.diagnostics_addr to default value /ff0e:0:0:0:0:0:75:75
                              2013-09-12 07:52:33,989|main|DEBUG|org.jgroups.protocols.FRAG2.up(FRAG2.java:173) - received CONFIG event: {bind_addr=/fe80:0:0:0:d572:3236:54a6:1ff0%13}
                              2013-09-12 07:52:34,020|main|DEBUG|org.jgroups.protocols.UDP.createSockets(UDP.java:323) - sockets will use interface fe80:0:0:0:d572:3236:54a6:1ff0%13
                              2013-09-12 07:52:34,332|main|DEBUG|org.jgroups.protocols.UDP.createSockets(UDP.java:393) - socket information:
                              mcast_addr=ff0e:0:0:0:0:0:e406:708:46655, bind_addr=/fe80:0:0:0:d572:3236:54a6:1ff0%13, ttl=2
                              sock: bound to fe80:0:0:0:d572:3236:54a6:1ff0%13:61344, receive buffer size=20000000, send buffer size=640000
                              mcast_sock: bound to fe80:0:0:0:d572:3236:54a6:1ff0%13:46655, send buffer size=640000, receive buffer size=25000000
                              2013-09-12 07:52:34,347|main|DEBUG|org.jgroups.protocols.pbcast.GMS.down(GMS.java:924) - address=taru_laptop-12830, cluster=ISPN, physical address=fe80:0:0:0:d572:3236:54a6:1ff0%13:61344
                              2013-09-12 07:52:34,410|main|DEBUG|org.jgroups.protocols.pbcast.ClientGmsImpl.determineCoord(ClientGmsImpl.java:298) - election results: {taru_laptop-45266=1}
                              2013-09-12 07:52:34,410|main|DEBUG|org.jgroups.protocols.pbcast.ClientGmsImpl.joinInternal(ClientGmsImpl.java:133) - sending JOIN(taru_laptop-12830) to taru_laptop-45266
                              2013-09-12 07:52:34,987|main|DEBUG|org.jgroups.protocols.pbcast.NAKACK.setDigest(NAKACK.java:1216) -
                              [taru_laptop-12830 setDigest()]
                              existing digest:  []
                              new digest:       taru_laptop-45266: [0 (0)], taru_laptop-12830: [0 (0)]
                              resulting digest: taru_laptop-12830: [0 (0)], taru_laptop-45266: [0 (0)]
                              2013-09-12 07:52:34,987|main|DEBUG|org.jgroups.protocols.pbcast.GMS.installView(GMS.java:595) - taru_laptop-12830: installing [taru_laptop-45266|1] [taru_laptop-45266, taru_laptop-12830]
                              2013-09-12 07:52:35,003|main|DEBUG|org.jgroups.protocols.FD_SOCK.down(FD_SOCK.java:328) - VIEW_CHANGE received: [taru_laptop-45266, taru_laptop-12830]
                              2013-09-12 07:52:35,003|FD_SOCK pinger,taru_laptop-12830|DEBUG|org.jgroups.protocols.FD_SOCK.run(FD_SOCK.java:386) - ping_dest is taru_laptop-45266, pingable_mbrs=[taru_laptop-45266, taru_laptop-12830]
                              2013-09-12 07:52:35,003|main|DEBUG|org.infinispan.remoting.transport.jgroups.JGroupsTransport.viewAccepted(JGroupsTransport.java:617) - New view accepted: [taru_laptop-45266|1] [taru_laptop-45266, taru_laptop-12830]
                              2013-09-12 07:52:35,003|main|INFO |org.infinispan.remoting.transport.jgroups.JGroupsTransport.viewAccepted(JGroupsTransport.java:644) - ISPN000094: Received new cluster view: [taru_laptop-45266|1] [taru_laptop-45266, taru_laptop-12830]
                              2013-09-12 07:52:35,018|main|INFO |com.enlighta.cluster.JMSClientListener.observeViewChanged(JMSClientListener.java:34) - View Changed : EventImpl{type=VIEW_CHANGED, newMembers=[taru_laptop-45266, taru_laptop-12830], oldMembers=null, localAddress=taru_laptop-12830, viewId=1, subgroupsMerged=null, mergeView=false}
                              2013-09-12 07:52:35,018|main|INFO |org.infinispan.remoting.transport.jgroups.JGroupsTransport.startJGroupsChannelIfNeeded(JGroupsTransport.java:220) - ISPN000079: Cache local address is taru_laptop-12830, physical addresses are [fe80:0:0:0:d572:3236:54a6:1ff0%13:61344]
                              2013-09-12

                               

                               

                               

                               

                              Here are the logs from the Tomcat nodes starting on the Linux server, where they are not able to join the cluster.

                               


                              tomcat 1 logs
                              -----------------------------------------------------------------------------------

                                  2013-09-12 07:39:31,785|main|INFO  - ISPN000078: Starting JGroups Channel
                                  2013-09-12 07:39:31,903|main|WARN  - [JGRP00014] TP.discard_incompatible_packets has been deprecated: incompatible packets are discarded anyway
                                   2013-09-12 07:39:31,903|main|WARN  - [JGRP00014] TP.enable_bundling has been deprecated: will be ignored as bundling is on by default
                                  2013-09-12 07:39:31,925|main|WARN  - [JGRP00014] UNICAST.setTimeout has been deprecated: not used anymore
                                   2013-09-12 07:39:31,971|main|DEBUG - set property UDP.bind_addr to default value /192.168.0.5
                                  2013-09-12 07:39:31,971|main|DEBUG - set property UDP.diagnostics_addr to default value /224.0.75.75
                                   2013-09-12 07:39:31,997|main|DEBUG - received CONFIG event: {bind_addr=/192.168.0.5}
                                  2013-09-12 07:39:32,007|main|DEBUG - sockets will use interface 192.168.0.5
                                  2013-09-12 07:39:32,008|main|WARN  - [JGRP00015] the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
                                  2013-09-12 07:39:32,008|main|WARN  - [JGRP00015] the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
                                  2013-09-12 07:39:32,009|main|WARN  - [JGRP00015] the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
                                  2013-09-12 07:39:32,009|main|WARN  - [JGRP00015] the receive buffer of socket MulticastSocket was set to 25MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
                                  2013-09-12 07:39:32,010|main|DEBUG - socket information:
                                  mcast_addr=228.6.7.8:46655, bind_addr=/192.168.0.5, ttl=2
                                   sock: bound to 192.168.0.5:33562, receive buffer size=212992, send buffer size=212992
                                  mcast_sock: bound to 192.168.0.5:46655, send buffer size=212992, receive buffer size=212992
                                   2013-09-12 07:39:32,012|main|DEBUG - address=localhost-45145, cluster=ISPN, physical address=192.168.0.5:33562
                                  2013-09-12 07:40:02,030|main|DEBUG -
                                  [localhost-45145 setDigest()]
                                   existing digest:  []
                                  new digest:       localhost-45145: [0 (0)]
                                  resulting digest: localhost-45145: [0 (0)]
                                  2013-09-12 07:40:02,031|main|DEBUG - localhost-45145: installing [localhost-45145|0] [localhost-45145]
                                   2013-09-12 07:40:02,033|main|DEBUG - resuming message garbage collection
                                  2013-09-12 07:40:02,034|main|DEBUG - VIEW_CHANGE received: [localhost-45145]
                                  2013-09-12 07:40:02,035|main|DEBUG - New view accepted: [localhost-45145|0] [localhost-45145]
                                   2013-09-12 07:40:02,036|main|INFO  - ISPN000094: Received new cluster view: [localhost-45145|0] [localhost-45145]
                                  2013-09-12 07:40:02,038|main|INFO  - View Changed : EventImpl{type=VIEW_CHANGED, newMembers=[localhost-45145], oldMembers=null, localAddress=localhost-45145, viewId=0, subgroupsMerged=null, mergeView=false}
                                  2013-09-12 07:40:02,038|main|DEBUG - resuming message garbage collection
                                  2013-09-12 07:40:02,039|main|DEBUG - created group (first member). My view is [localhost-45145|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
                                   2013-09-12 07:40:02,039|main|INFO  - ISPN000079: Cache local address is localhost-45145, physical addresses are [192.168.0.5:33562]

                               

                               

                               

                              tomcat 2 log

                               

                              ----------------------------------------------------------------------------------------------------


                              2013-09-12 07:38:39,740|main|INFO  - Trying to create MAP of caches so it can be used by components..
                              2013-09-12 07:38:39,816|main|INFO  - ISPN000078: Starting JGroups Channel
                              2013-09-12 07:38:39,931|main|WARN  - [JGRP00014] TP.discard_incompatible_packets has been deprecated: incompatible packets are discarded anyway
                              2013-09-12 07:38:39,932|main|WARN  - [JGRP00014] TP.enable_bundling has been deprecated: will be ignored as bundling is on by default
                              2013-09-12 07:38:39,953|main|WARN  - [JGRP00014] UNICAST.setTimeout has been deprecated: not used anymore
                              2013-09-12 07:38:39,998|main|DEBUG - set property UDP.bind_addr to default value /192.168.0.5
                              2013-09-12 07:38:39,998|main|DEBUG - set property UDP.diagnostics_addr to default value /224.0.75.75
                              2013-09-12 07:38:40,025|main|DEBUG - received CONFIG event: {bind_addr=/192.168.0.5}
                              2013-09-12 07:38:40,034|main|DEBUG - sockets will use interface 192.168.0.5
                              2013-09-12 07:38:40,036|main|WARN  - [JGRP00015] the send buffer of socket DatagramSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
                              2013-09-12 07:38:40,036|main|WARN  - [JGRP00015] the receive buffer of socket DatagramSocket was set to 20MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
                              2013-09-12 07:38:40,037|main|WARN  - [JGRP00015] the send buffer of socket MulticastSocket was set to 640KB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
                              2013-09-12 07:38:40,037|main|WARN  - [JGRP00015] the receive buffer of socket MulticastSocket was set to 25MB, but the OS only allocated 212.99KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
                              2013-09-12 07:38:40,038|main|DEBUG - socket information:
                              mcast_addr=228.6.7.8:46655, bind_addr=/192.168.0.5, ttl=2
                              sock: bound to 192.168.0.5:46484, receive buffer size=212992, send buffer size=212992
                              mcast_sock: bound to 192.168.0.5:46655, send buffer size=212992, receive buffer size=212992
                              2013-09-12 07:38:40,040|main|DEBUG - address=localhost-28168, cluster=ISPN, physical address=192.168.0.5:46484
                              2013-09-12 07:39:10,056|main|DEBUG -
                              [localhost-28168 setDigest()]
                              existing digest:  []
                              new digest:       localhost-28168: [0 (0)]
                              resulting digest: localhost-28168: [0 (0)]
                              2013-09-12 07:39:10,056|main|DEBUG - localhost-28168: installing [localhost-28168|0] [localhost-28168]
                              2013-09-12 07:39:10,057|main|DEBUG - resuming message garbage collection
                              2013-09-12 07:39:10,058|main|DEBUG - VIEW_CHANGE received: [localhost-28168]
                              2013-09-12 07:39:10,060|main|DEBUG - New view accepted: [localhost-28168|0] [localhost-28168]
                              2013-09-12 07:39:10,061|main|INFO  - ISPN000094: Received new cluster view: [localhost-28168|0] [localhost-28168]
                              2013-09-12 07:39:10,063|main|INFO  - View Changed : EventImpl{type=VIEW_CHANGED, newMembers=[localhost-28168], oldMembers=null, localAddress=localhost-28168, viewId=0, subgroupsMerged=null, mergeView=false}
                              2013-09-12 07:39:10,063|main|DEBUG - resuming message garbage collection
                              2013-09-12 07:39:10,064|main|DEBUG - created group (first member). My view is [localhost-28168|0], impl is org.jgroups.protocols.pbcast.CoordGmsImpl
                              2013-09-12 07:39:10,065|main|INFO  - ISPN000079: Cache local address is localhost-28168, physical addresses are [192.168.0.5:46484]
                              2013-09-12 07:39:10,065|main|DEBUG - Waiting on view being accepted

                               

                              • 12. Re: Infinispan cluster with Invalidation
                                Dan Berindei Apprentice

                                 Viren, what discovery protocol are you using?

                                 You should definitely enable TRACE logging for org.jgroups when debugging this kind of problem. You should also try to form a cluster with just JGroups: http://www.jgroups.org/manual/html/ch02.html#ItDoesntWork
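                                 For the TRACE suggestion, assuming a log4j 1.x configuration like the ones producing the logs above, a minimal sketch of the needed fragment would be (the rest of your existing log4j config stays as-is):

```properties
# Hypothetical log4j 1.x fragment: raise only org.jgroups to TRACE,
# leaving the root logger and everything else at their current levels.
log4j.logger.org.jgroups=TRACE
```

                                 For the JGroups-only test, one option is the demo that ships in the JGroups jar, e.g. `java -cp jgroups.jar org.jgroups.demos.Draw -props udp.xml` on both machines (jar path and props file here are placeholders; use the JGroups version bundled with Infinispan 5.1.x and the same stack configuration your Infinispan transport uses). If the two Draw instances do not see each other, the problem is in the JGroups/network layer rather than in Infinispan.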