In a word, both need to be specified. The key is whether you choose to share the JGroups channel between the cluster and TreeCache instances. Sometimes your IT department allows only one mcast address and port, for example. Other times, you want to segregate the traffic, which warrants two separate channels.
I did not realize this was possible, so I tried configuring JBossCache to use the same JGroups configuration in order to share the channel, as the previous post mentioned. Once deployed, however, the two components do not appear to interact nicely with each other (it would have been really cool if they did).
I took the JBossCache 1.2 configuration specified in the file replAsync-service.xml and set its cluster name, mcast address, and mcast port to match the JBoss 3.2.7 cluster configuration, then deployed the file.
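For reference, the relevant pieces of replAsync-service.xml look roughly like this sketch. The ClusterName, mcast_addr, and mcast_port values here are placeholders I picked to illustrate matching them to a partition's settings; substitute whatever your own cluster-service.xml actually uses:

```xml
<server>
  <mbean code="org.jboss.cache.TreeCache"
         name="jboss.cache:service=TreeCache">
    <!-- Set to the same group name the JBoss partition uses -->
    <attribute name="ClusterName">DefaultPartition</attribute>
    <attribute name="CacheMode">REPL_ASYNC</attribute>
    <attribute name="ClusterConfig">
      <config>
        <!-- mcast_addr/mcast_port set to match the cluster's UDP
             stack; the values shown are examples only -->
        <UDP mcast_addr="228.1.2.3" mcast_port="45566" ip_ttl="64"/>
        <!-- ...rest of the JGroups protocol stack left unchanged... -->
      </config>
    </attribute>
  </mbean>
</server>
```

Note that even with identical settings, this still creates a second JChannel rather than reusing the partition's channel, which is exactly the behavior described below.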
First, each channel appears to see the other as a peer in the cluster. I guess it makes sense that this happens (two JChannel instances are being created), but it means that on a single node I will have "two" cluster members. That seems odd. It would be neat, I think, if the cache could piggyback on the application server's cluster rather than exist as a separate cluster member (or is this a bad idea?).
Second, upon recognizing the existence of the "peer", JBossCache attempts to query the peer for its cache state. This is where everything appears to go wrong. The complete message/stack trace is as follows:
03:06:46,890 INFO  [TreeCache] viewAccepted(): new members: [aaaaa:1357 (additional data: 18 bytes), bbbbb:1369]
03:06:46,900 INFO  [DefaultPartition] New cluster view for partition DefaultPartition (id: 1, delta: 1) : [192.168.1.100:1099, 192.168.1.100:1369]
03:06:46,900 INFO  [DefaultPartition] I am (192.168.1.100:1099) received membershipChanged event:
03:06:46,900 INFO  [DefaultPartition] Dead members: 0 ()
03:06:46,900 INFO  [DefaultPartition] New Members : 1 ([192.168.1.100:1369])
03:06:46,911 INFO  [DefaultPartition] All Members : 2 ([192.168.1.100:1099, 192.168.1.100:1369])
03:06:49,895 ERROR [FD_SOCK] received null cache; retrying
03:06:51,888 INFO  [TreeCache] state could not be retrieved (must be first member in group)
03:06:53,400 ERROR [FD_SOCK] received null cache; retrying
03:06:56,895 ERROR [FD_SOCK] received null cache; retrying
03:06:57,636 INFO  [TreeCache] received the state (size=1463 bytes)
03:06:57,736 ERROR [TreeCache] failed unserializing state
java.lang.ClassCastException
        at org.jboss.cache.TreeCache$MessageListenerAdaptor._setState(TreeCache.java:2828)
        at org.jboss.cache.TreeCache$MessageListenerAdaptor.setState(TreeCache.java:2797)
        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.passUp(MessageDispatcher.java:614)
        at org.jgroups.blocks.RequestCorrelator.receive(RequestCorrelator.java:331)
        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.handleUp(MessageDispatcher.java:722)
        at org.jgroups.blocks.MessageDispatcher$ProtocolAdapter.access$300(MessageDispatcher.java:554)
        at org.jgroups.blocks.MessageDispatcher$1.run(MessageDispatcher.java:691)
        at java.lang.Thread.run(Thread.java:534)
After this, the following message kept repeating (which is the application server querying the now-dead cache, I think):
03:07:03,705 ERROR [FD_SOCK] socket address for aaaaa:1357 (additional data: 18 bytes) could not be fetched, retrying
If JBossCache messaging can share the JBossCluster channel, is this the right way to go about it?