3 Replies Latest reply on Apr 29, 2013 12:26 PM by belaban

    Infinispan physical layer (JGroups->UDP->IPoIB->Infiniband)

    cotton.ben

      Hi,

      Our project has chosen Infinispan 5.x as our cache/grid provider. Infinispan will sit in front of our back-end Exadata appliance, which serves as our transactional system of record. Some questions:

      1. Are all of Infinispan's replication/distribution/coherency-heartbeat network I/O operations (Socket/SocketChannel) implemented via provider=JGroups?

      2. I see that JGroups directly supports the transport/network-layer provider=UDP/IP, so if I have physical provider=InfiniBand (replacing physical provider=Ethernet), I assume all my application code will work seamlessly and transparently via the IP layer, i.e. I don't have to change any existing application code (or Infinispan config) to use Infinispan (via JGroups) on physical provider=InfiniBand. Can you confirm this?

      3. Does JGroups directly support an IPoIB configuration?

      4. Does JGroups on Java 7 directly support SDP (Sockets Direct Protocol) and the NIO2 AsynchronousSocketChannel API?

      5. We have a souped-up Linux stack that supports all of Java 7/SDP, IPoIB, and InfiniBand's native verbs library. Is there *any* Java API bridge exposed by Infinispan (over JGroups, over Java 7, over SDP) that would let me get directly at the native verbs capabilities? Or is it more likely that, to reach the verbs API directly, I will have to use a JNI bridge to the Linux libraries and OS system calls?
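      (For context, question 4 refers to JDK 7's AsynchronousSocketChannel. A minimal loopback sketch of that API, independent of Infinispan/JGroups -- whether JGroups can sit on top of this channel type is exactly what the question asks:)

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.charset.StandardCharsets;

public class Nio2Demo {

    // Sends msg over a loopback TCP connection using JDK 7's asynchronous
    // channels (NIO2) and returns what the accepting side read.
    public static String echoOnce(String msg) throws Exception {
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(new InetSocketAddress("127.0.0.1", port)).get(); // Future-style completion

        AsynchronousSocketChannel peer = server.accept().get();
        client.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.US_ASCII))).get();

        ByteBuffer buf = ByteBuffer.allocate(64);
        peer.read(buf).get();
        buf.flip();
        String received = StandardCharsets.US_ASCII.decode(buf).toString();

        client.close();
        peer.close();
        server.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("ping")); // prints "ping"
    }
}
```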


      Thanks,

      Ben

        • 1. Re: Infinispan physical layer (JGroups->UDP->IPoIB->Infiniband)
          belaban

          Hi Ben,

          1. Yes. There are some cache loaders that access remote cache systems and may use a different protocol, e.g. Cassandra.

          2. Yes - we have done this before. However, IIRC, InfiniBand had a max datagram size limit of 4K, so we changed the default config and added FRAG just above the transport (UDP). Not very nice, and not an issue when TCP is the transport.
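          A minimal sketch of such a stack (the protocol names are standard JGroups, but the frag_size value is an assumption chosen to stay under a 4K IPoIB datagram limit, not the exact config we shipped):

```xml
<config xmlns="urn:org:jgroups">
    <!-- UDP transport; IPoIB presents itself as a normal IP interface,
         so no transport change is needed -->
    <UDP bind_addr="GLOBAL" mcast_port="45588"/>
    <!-- FRAG2 directly above the transport keeps each datagram under the
         (assumed) 4K limit; 3800 leaves headroom for JGroups headers -->
    <FRAG2 frag_size="3800"/>
    <PING/>
    <MERGE3/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
</config>
```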


          3. No. Some time ago I looked at SDP and RDMA/verbs (jVerbs), but haven't tried them out. I'm planning to take a closer look at a transport based on NIO2 (I need multicast channels) in 3.5, but since NIO2 requires JDK 7, I'm not sure I'll be able to baseline on that just yet.

          4. The way I understand SDP (I haven't tried this out yet!), if run on Solaris or Linux, TCP or UDP connections should work seamlessly, but I don't see how multicasts would work unless the SDP config file also allows class D addresses as targets. Re NIO2: I don't use NIO2 yet, see above.
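          For reference, Java 7's SDP support is configuration-only: the JVM is started with -Dcom.sun.sdp.conf pointing at a rules file, so no application code changes are needed. A sketch (the addresses and port ranges are placeholders; 7800 is the usual JGroups TCP port):

```
# sdp.conf -- rules that map TCP sockets onto SDP (Java 7 on Solaris,
# or Linux with an OFED stack); start the JVM with:
#   java -Dcom.sun.sdp.conf=sdp.conf ...
# Use SDP when binding any local address on port 7800
bind * 7800
# Use SDP when connecting to hosts on the IB subnet, ports 7800-7810
connect 192.168.10.* 7800-7810
```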


          5. No.

          Having said that, it should be possible to write an RDMA transport which uses either a Java verbs API (jVerbs?) or JNI/RDMA verbs to interface with C. Note that the latter would not make it into the JGroups core, as I want JGroups to remain pure Java. It could be a linked project, though.
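          The JNI route would boil down to a thin bridge class along these lines -- a hypothetical sketch only, every name here is illustrative and no such class exists in JGroups:

```java
// Hypothetical JNI bridge to the native verbs libraries (libibverbs /
// librdmacm). The method names and signatures are illustrative.
public class VerbsBridge {

    // In a real build, a static block would call
    // System.loadLibrary("jgroups_rdma") to bind these declarations
    // to C wrappers around calls such as ibv_open_device() and
    // ibv_post_send(). Omitted here so the class loads without the lib.

    public native long openDevice(String deviceName);

    public native int postSend(long queuePair, byte[] payload);
}
```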


          [1] https://issues.jboss.org/browse/JGRP#selectedTab=com.atlassian.jira.plugin.system.project%3Aroadmap-panel

          • 2. Re: Infinispan physical layer (JGroups->UDP->IPoIB->Infiniband)
            cotton.ben

            Thank you so much Bela for this excellent response.

            • 3. Re: Infinispan physical layer (JGroups->UDP->IPoIB->Infiniband)
              belaban

              No problem. Let me know your requirements, so I can think about accommodating them in a 3.5 release.

              Cheers,