I am doing the same thing: using MS NLB to handle load balancing while JBoss does its own clustering. What I have found is that there seems to be a conflict between the two (NLB and JBoss). I don't yet know why, but it has to do with multicasting. Basically, when you create an NLB cluster, JBoss clustering stops working. NLB can run in either unicast or multicast mode; AFAIK typical installs use multicast mode, though MS sets the default to unicast (we run in multicast because we have to). If you delete the NLB cluster, JBoss clustering starts working again.
I also used the JGroups 2.5.0 testing tools to demonstrate this.
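For anyone who wants to reproduce this, the JGroups multicast diagnostic classes can be run from the command line. This is a sketch based on the JGroups 2.5.x test tools; the jar path, multicast address, and port below are placeholders you'd replace with your own cluster's settings.

```shell
# On one node, start a receiver listening on the cluster's multicast group:
java -cp jgroups-all.jar org.jgroups.tests.McastReceiverTest \
     -mcast_addr 228.8.8.8 -port 45566

# On another node, send test packets to the same group/port. If the
# receiver never prints anything while the NLB cluster exists, multicast
# traffic between the nodes is being blocked:
java -cp jgroups-all.jar org.jgroups.tests.McastSenderTest \
     -mcast_addr 228.8.8.8 -port 45566
```

Running the sender/receiver pair both with and without the NLB cluster configured is a quick way to confirm whether NLB is what's killing the multicast traffic.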
If someone has a clue as to why this is happening, please let us know.
I don't know much about NLB. But from looking at http://technet2.microsoft.com/WindowsServer/f/?en/library/1611cae3-5865-4897-a186-7e6ebd8855cb1033.mspx I get the feeling you should have multiple adapters on your servers, with NLB working on one and your JBoss intra-cluster traffic on the other.
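If you do go the two-adapter route, you'd want JBoss bound to the non-NLB NIC. As a rough sketch (assuming a JBoss AS 4.x-era install; 10.0.1.5 is a hypothetical address on the second adapter):

```shell
# Bind all JBoss services, including the clustering traffic, to the
# adapter that NLB does NOT own. The -b switch sets jboss.bind.address,
# which the stock clustering configs pick up for their bind_addr.
./run.sh -c all -b 10.0.1.5
```

That way NLB can manage the client-facing adapter while the intra-cluster JGroups traffic stays on the other one.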
Also note that you can run JBoss clustering without using UDP multicast; a sample protocol stack is shown here: http://wiki.jboss.org/wiki/Wiki.jsp?page=JGroupsStackTCP . The standard clustering service deployment descriptors also include commented-out example TCP configs. I have no idea whether that would work any better with NLB though.
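For reference, a TCP-based stack along the lines of that wiki page looks roughly like this. This is a trimmed sketch in the JGroups 2.x XML style; the host names, ports, and timeouts are placeholders, and your actual stack would come from the wiki page or the commented-out configs in the clustering deployment descriptors.

```xml
<Config>
    <!-- Plain TCP transport instead of UDP multicast; start_port is the
         first port JGroups tries to bind on each node -->
    <TCP bind_addr="node1" start_port="7800" loopback="true"/>
    <!-- Static discovery: every node lists the others explicitly,
         so no multicast is needed at all -->
    <TCPPING initial_hosts="node1[7800],node2[7800]" port_range="3"
             timeout="3500" num_initial_members="2"/>
    <!-- Failure detection, reliable delivery, and group membership,
         as in the stock configs -->
    <FD timeout="2500" max_tries="5" shun="true"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <pbcast.NAKACK gc_lag="100" retransmit_timeout="600,1200,2400,4800"/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="20000"/>
    <pbcast.GMS print_local_addr="true" join_timeout="5000" shun="true"/>
</Config>
```

The key difference from the UDP stack is TCPPING: since there is no multicast discovery, every node has to know the other members' addresses up front via initial_hosts.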
Ok, switching to TCP did the trick; however, we still could not use MS NLB because it has other requirements we were not able to overcome. In the mode we were running it in, MS NLB requires a static ARP entry on the router/switch, which is not the preferred method in the environment we were deploying to.
So, instead, we have switched to a hardware-based NLB solution. I then switched everything back to UDP multicast, and all seems happy...so far ;)
Still, keep in mind, there is some strange incompatibility between MS NLB and UDP multicast JBOSS clusters. Just my 2 cents ;)