Infinispan cluster with Invalidation
jrox · May 1, 2012 8:47 AM

I'm currently trying to run an Infinispan cluster (5.1.4.FINAL) across two machines using invalidation mode. As a first step I'm running two processes on the same machine that should send invalidation messages to each other, but I can't get them to see each other and exchange messages properly.
As soon as the second process starts up, both nodes begin sending messages, but each one drops the other's messages with a warning that the sender is not in its view ("sender not in table"). I tried the JGroups McastSenderTest/McastReceiverTest, and multicast itself seems to work: with a McastReceiverTest running I even receive the packets sent by the Infinispan nodes, although the output looks a bit strange. Does anyone know what I'm doing wrong here?
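For reference, the kind of setup described above can be sketched declaratively (a minimal sketch assuming Infinispan 5.1's XML configuration schema and the default JGroups UDP multicast transport; the cluster name is only illustrative):

```xml
<infinispan xmlns="urn:infinispan:config:5.1">
   <global>
      <!-- Default transport uses the bundled jgroups-udp.xml stack (UDP multicast discovery) -->
      <transport clusterName="demo-cluster"/>
   </global>
   <default>
      <!-- Synchronous invalidation: a write on one node removes the stale entry on the others -->
      <clustering mode="invalidation">
         <sync/>
      </clustering>
   </default>
</infinispan>
```

Both processes load the same configuration; the "sender not in table" warnings below suggest the nodes' UDP packets reach each other but cluster discovery never merges them into one view.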
Node 1:
10:07:18,148 INFO [JGroupsTransport] ISPN000078: Starting JGroups Channel
10:07:18,805 WARN [UDP] send buffer of socket java.net.DatagramSocket@359ba2ce was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
10:07:18,805 WARN [UDP] receive buffer of socket java.net.DatagramSocket@359ba2ce was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
10:07:18,805 WARN [UDP] send buffer of socket java.net.MulticastSocket@7cdd9de0 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
10:07:18,805 WARN [UDP] receive buffer of socket java.net.MulticastSocket@7cdd9de0 was set to 25MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
10:07:21,842 INFO [JGroupsTransport] ISPN000094: Received new cluster view: [pbapp6-59135|0] [pbapp6-59135]
10:07:21,843 INFO [JGroupsTransport] ISPN000079: Cache local address is pbapp6-59135, physical addresses are [fe80:0:0:0:216:3eff:fe5a:3f76%3:32830]
10:07:21,853 INFO [GlobalComponentRegistry] ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.4.FINAL
10:18:38,465 WARN [NAKACK] pbapp6-59135: dropped message 1 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]
10:18:38,847 WARN [NAKACK] pbapp6-59135: dropped message 2 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]
10:18:39,475 WARN [NAKACK] pbapp6-59135: dropped message 3 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]
10:18:39,648 WARN [NAKACK] pbapp6-59135: dropped message 4 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]
10:18:41,567 WARN [NAKACK] pbapp6-59135: dropped message 5 from pbapp6-49654 (sender not in table [pbapp6-59135]), view=[pbapp6-59135|0] [pbapp6-59135]
....
Node 2:
10:18:27,831 INFO [JGroupsTransport] ISPN000078: Starting JGroups Channel
10:18:28,461 WARN [UDP] send buffer of socket java.net.DatagramSocket@7cdd9de0 was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
10:18:28,461 WARN [UDP] receive buffer of socket java.net.DatagramSocket@7cdd9de0 was set to 20MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
10:18:28,462 WARN [UDP] send buffer of socket java.net.MulticastSocket@4c130f9f was set to 640KB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max send buffer in the OS correctly (e.g. net.core.wmem_max on Linux)
10:18:28,462 WARN [UDP] receive buffer of socket java.net.MulticastSocket@4c130f9f was set to 25MB, but the OS only allocated 131.07KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)
10:18:31,490 INFO [JGroupsTransport] ISPN000094: Received new cluster view: [pbapp6-49654|0] [pbapp6-49654]
10:18:31,492 INFO [JGroupsTransport] ISPN000079: Cache local address is pbapp6-49654, physical addresses are [fe80:0:0:0:216:3eff:fe5a:3f76%3:32832]
10:18:31,501 INFO [GlobalComponentRegistry] ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.4.FINAL
10:18:36,984 WARN [NAKACK] pbapp6-49654: dropped message 245 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]
10:18:37,258 WARN [NAKACK] pbapp6-49654: dropped message 246 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]
10:18:41,934 WARN [UDP] pbapp6-49654: no physical address for 724282bc-9cac-33a8-511a-d94c444222f4, dropping message
10:18:44,906 WARN [UDP] pbapp6-49654: no physical address for 724282bc-9cac-33a8-511a-d94c444222f4, dropping message
10:18:46,491 WARN [NAKACK] pbapp6-49654: dropped message 247 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]
10:18:46,827 WARN [NAKACK] pbapp6-49654: dropped message 248 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]
10:18:49,431 WARN [NAKACK] pbapp6-49654: dropped message 249 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]
10:18:49,903 WARN [NAKACK] pbapp6-49654: dropped message 250 from 724282bc-9cac-33a8-511a-d94c444222f4 (sender not in table [pbapp6-49654]), view=[pbapp6-49654|0] [pbapp6-49654]
....
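As an aside, the UDP buffer warnings in both logs mean the kernel capped the socket buffers at ~131KB, far below what JGroups requested. On Linux the ceilings can be raised via sysctl (a sketch only; the exact values should match what the logs request):

```shell
# Raise the kernel's maximum socket buffer sizes (run as root); values are illustrative
sysctl -w net.core.rmem_max=26214400   # ~25MB receive buffer ceiling
sysctl -w net.core.wmem_max=655360     # 640KB send buffer ceiling
```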
McastSenderTest:
-bash-3.1$ java -cp jgroups-3.0.9.Final.jar org.jgroups.tests.McastSenderTest -mcast_addr 228.6.7.8 -port 46655
Socket #1=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/fe80:0:0:0:216:3eff:fe5a:3f76%3
Socket #2=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/192.168.6.102
Socket #3=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/0:0:0:0:0:0:0:1%1
Socket #4=0.0.0.0/0.0.0.0:46655, ttl=32, bind interface=/127.0.0.1
> << Received packet from fe80:0:0:0:216:3eff:fe5a:3f76%3:46655: <binary JGroups payload ending in "ISPN">
<< Received packet from fe80:0:0:0:216:3eff:fe5a:3f76%3:46655: <binary JGroups payload ending in "ISPN">
<< Received packet from fe80:0:0:0:216:3eff:fe5a:3f76%3:46655: <binary JGroups payload ending in "ISPN">
(several more packets received from the same address; the raw binary payloads garble the console output)