
    Integration of TOA/TOM into Infinispan 5.3

    - the initial integration will have a blocking state transfer, still built on top of the NBST code, but simplified

    - full NBST support over TOA/TOM will be added in the scope of this release

    - we had a peer review between Pedro, Dan and Mircea. The pull request is still open and will be

    integrated once the review is finalized, together with an alpha release.

     

    Message bundling and OOB messages

     

    In JGroups 3.3, all messages will be bundled: not just regular messages,
    but also OOB messages. On the sender side this works as follows (see the
    sketch after the list):

    - A thread sending a message in the transport adds it to a queue

    - There's one thread which dequeues messages and sends them as bundles

      - It sends a message bundle if the max size has been reached, or

        there are no more messages in the queue

      - This means single messages are sent immediately, or we fill up a

        bundle (in a few microseconds) and send it
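
    A minimal sketch of this dequeue-and-bundle loop (illustrative only;
    the class and the sendBundle method are assumptions, not JGroups'
    actual bundler implementation):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Illustrative bundler loop, not JGroups' real code.
        public class BundlerSketch implements Runnable {
            private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
            private static final int MAX_BUNDLE_SIZE = 64000; // assumed limit

            public void send(byte[] msg) throws InterruptedException {
                queue.put(msg); // sender threads only enqueue
            }

            public void run() {
                List<byte[]> bundle = new ArrayList<>();
                int size = 0;
                try {
                    while (true) {
                        byte[] msg = queue.take(); // block until a message arrives
                        bundle.add(msg);
                        size += msg.length;
                        // Drain until the bundle is full or the queue is empty:
                        // a single message is therefore sent immediately.
                        while (size < MAX_BUNDLE_SIZE && (msg = queue.poll()) != null) {
                            bundle.add(msg);
                            size += msg.length;
                        }
                        sendBundle(bundle); // hypothetical: one write for all messages
                        bundle.clear();
                        size = 0;
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            private void sendBundle(List<byte[]> bundle) { /* transport write elided */ }
        }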

     

    Impact on Infinispan:

    - Use DONT_BUNDLE instead of OOB if you don't want messages to be bundled (see the flag example after this list)

      - However, even DONT_BUNDLE might get deprecated

    - If we have 1 sender invoking sync RPCs, we don't need to set

      DONT_BUNDLE anymore

    - If we have multiple senders invoking sync RPCs, performance should

      get better as RPCs and responses are bundled

    - Since bundling will result in message *batches* on the receiver,

      performance should increase in general

    - this is scheduled for integration in ISPN 5.3: ISPN-2848
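
    Setting these flags looks like this (OOB and DONT_BUNDLE are real
    JGroups 3.x message flags; the channel setup is just a generic
    example):

        import org.jgroups.JChannel;
        import org.jgroups.Message;

        public class FlagExample {
            public static void main(String[] args) throws Exception {
                JChannel ch = new JChannel(); // default stack
                ch.connect("demo-cluster");

                Message msg = new Message(null, null, "hello"); // null dest == multicast
                msg.setFlag(Message.Flag.OOB);         // deliver out of band
                msg.setFlag(Message.Flag.DONT_BUNDLE); // bypass the bundler (may get deprecated, see above)
                ch.send(msg);
                ch.close();
            }
        }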

      

    Message batching

     

    Message bundles sent by a sender are received as message batches
    (MessageBatch) by the receivers. When a batch is received, it is
    passed up the stack via up(MessageBatch).

     

    Protocols can remove, replace or add messages in a batch and pass the
    batch further up, as in the sketch below.
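
    A minimal protocol sketch (this assumes the Protocol.up(MessageBatch)
    callback added in JGroups 3.3; the header id and the handle method are
    made-up examples):

        import org.jgroups.Message;
        import org.jgroups.stack.Protocol;
        import org.jgroups.util.MessageBatch;

        // Sketch: consume this protocol's messages from the batch and
        // pass the rest further up the stack.
        public class ExampleProtocol extends Protocol {
            private static final short ID = 1025; // made-up header id

            @Override
            public void up(MessageBatch batch) {
                for (Message msg : batch) {
                    if (msg.getHeader(ID) != null) {
                        batch.remove(msg); // consume messages addressed to us
                        handle(msg);
                    }
                }
                if (!batch.isEmpty())
                    up_prot.up(batch); // remaining messages go further up
            }

            private void handle(Message msg) { /* protocol-specific processing */ }
        }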

     

    The advantage of a batch is that resources such as locks are acquired

    only once for a batch of N messages rather than N times. Example: when

    NAKACK2 receives a batch of 10 messages, it adds the 10 messages to

    the receiver table in a bulk operation, which is more efficient than

    doing this 10 times.
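
    The pattern, in generic form (illustrative only, not NAKACK2's actual
    code):

        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;
        import java.util.concurrent.locks.ReentrantLock;

        // One lock acquisition for N messages instead of N acquisitions.
        public class BulkAddSketch {
            private final ReentrantLock lock = new ReentrantLock();
            private final Map<Long,Object> table = new TreeMap<>(); // stand-in for the receiver table

            public void addBatch(long firstSeqno, List<Object> batch) {
                lock.lock();
                try {
                    long seqno = firstSeqno;
                    for (Object msg : batch)
                        table.put(seqno++, msg); // bulk insert under a single lock
                } finally {
                    lock.unlock();
                }
            }
        }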

     

    Further optimizations on batching (probably 3.4):

    - Remove similar ops, e.g. UNICAST3 acks for A:15, A:25 and A:35 can

      be clubbed together into just ack(A:35)

    - Merge similar headers, e.g. multicast messages 20-30 can be ordered

      by seqno, and we simply send a range [20..30] and let the receiver

      generate the headers on the fly

     

    Async Invocation API (AIA)

    Infinispan will benefit from this in two ways:

    Avoid deadlock of OOB/regular threads

    - Infinispan will use its own thread pool for sending messages. For a complete
    discussion of this, please follow the JIRA (ISPN-2808) and/or the
    discussion on the mailing list.

    - this is scheduled for integration in ISPN 5.3.

     

    Avoid keeping threads BLOCKED when waiting for locks to be acquired

    JGroups only passes up messages to Infinispan, which then uses its own

    thread pool to deliver them. E.g. based on Pedro's code for TO, we

    could parallelize delivery based on the target keys of the

    transaction. E.g. if we have tx1 modifying keys {A,B,C} and tx2

    modifying keys {T,U}, then tx1 and tx2 can be run concurrently.

     

    If tx1 and tx2 modify overlapping key sets, then tx2 would be queued
    and executed *after* tx1, without taking up a thread from the pool.
    This reduces the chance of the thread pool maxing out and also
    ensures that different threads do not contend for locks on the
    same keys.

     

    The implementation could be done in an interceptor fronting the
    interceptor stack, which queues dependent TXs and - when ready to be
    executed - sends them up the interceptor stack on a thread from the
    internal pool (see the sketch below).
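
    A minimal sketch of the dependency queueing (hypothetical names
    throughout; this is not Infinispan code, and it assumes each TX
    declares its key set up front; it uses java.util.concurrent's
    CompletableFuture for brevity):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Hypothetical: runs a TX only after all earlier TXs that touch
        // any of its keys have completed; disjoint TXs run concurrently.
        public class KeyOrderedExecutor {
            private final ExecutorService pool = Executors.newFixedThreadPool(8);
            private final Map<Object,CompletableFuture<Void>> lastTxPerKey = new HashMap<>();

            public synchronized CompletableFuture<Void> submit(Set<Object> keys, Runnable tx) {
                List<CompletableFuture<Void>> deps = new ArrayList<>();
                for (Object key : keys) {
                    CompletableFuture<Void> prev = lastTxPerKey.get(key);
                    if (prev != null)
                        deps.add(prev); // must wait for the previous TX on this key
                }
                CompletableFuture<Void> run = CompletableFuture
                    .allOf(deps.toArray(new CompletableFuture[0]))
                    .thenRunAsync(tx, pool); // only now does the TX occupy a thread
                for (Object key : keys)
                    lastTxPerKey.put(key, run);
                return run;
            }
        }

    With this, tx1 on {A,B,C} and tx2 on {T,U} run concurrently, while a
    tx3 touching {C,T} would only run after both have completed.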

     

    Infinispan having its own thread pool means that JGroups threads will

    not block anymore, e.g. trying to acquire a lock for a TX. The size of

    those pools can therefore be reduced.

     

    The advantage of AIA is that it's up to Infinispan, not JGroups, how

    to deliver messages. JGroups delivers messages based on the order in

    which they were sent by a sender (FIFO), whereas Infinispan can make

    much more informed decisions as to how to deliver the messages.

     

    This is scheduled for a future Infinispan release (ISPN-2849).

     

    Internal thread pool for JGroups

     

    All JGroups internal messages use the internal thread pool (message

    flag=INTERNAL). Not having to share the OOB pool with apps (such as

    Infinispan) means that internal messages can always be processed, and

    are not discarded or blocked, e.g. by a maxed-out thread pool.

     

    The internal pool can be switched off, and - if AIA is implemented in

    Infinispan - the number of OOB and regular threads can be massively

    reduced. The internal thread pool doesn't need to be big either.
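
    For example, in a 3.3 transport configuration the internal pool is
    sized independently of the regular and OOB pools (attribute names
    follow the JGroups example stacks; verify them against your udp.xml):

        <UDP
            thread_pool.enabled="true"
            thread_pool.min_threads="2"
            thread_pool.max_threads="8"

            oob_thread_pool.enabled="true"
            oob_thread_pool.min_threads="2"
            oob_thread_pool.max_threads="8"

            internal_thread_pool.enabled="true"
            internal_thread_pool.min_threads="1"
            internal_thread_pool.max_threads="4"/>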

     

    UNICAST3

     

    Successor to UNICAST and UNICAST2, combining the best of both worlds:
    it acks single messages quickly, so we have no first-msg-lost or
    last-msg-lost issues anymore, yet it doesn't generate many acks. In a
    stack configuration it simply takes the place of UNICAST or UNICAST2,
    as shown below.
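
    For example (xmit_interval is just an illustrative value):

        <!-- ... protocols below UNICAST3 ... -->
        <UNICAST3 xmit_interval="500"/>
        <!-- ... protocols above UNICAST3 ... -->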

     

    It has been proposed to trigger an ACK only after a certain number of
    messages rather than after every batch, to avoid acking small batches.

     

    https://issues.jboss.org/browse/JGRP-1594

     

    Roadmap Infinispan 5.3 / JGroups 3.3

    - Release JGroups 3.3 with an internal thread pool, target ca. first week of March

    - Use this in Infinispan (incl. AIA with an ISPN-owned thread pool, but not yet TX ordering and parallel delivery)