Wildfly 10 standalone-ha cluster with infinispan on AWS
asher_bitton Jul 18, 2016 3:08 AM

Hi,
We are building a WildFly 10 standalone-ha cluster (2 to 4 nodes) behind a load balancer in AWS, moving from a VMware-based WildFly 8 standalone setup with a keepalived load balancer and Apache fronting the WildFly servers. After completing the setup everything seems to be working fine: when testing the cluster (taking the different nodes up and down) we can see that sessions fail over properly and clients do not notice the node interruptions.
The only problem is that the Wildfly logs (of both nodes in the cluster) are full of the exception:
08:58:53,829 WARN [org.infinispan.transaction.tm.DummyTransaction] (SessionExpirationScheduler - 1) ISPN000112: exception while committing: javax.transaction.xa.XAException
The exception seems to occur every few seconds, regardless of traffic to the cluster.
Following other posts on the forum we have used the following infinispan configuration:
<replicated-cache name="portalCache" mode="SYNC">
    <locking isolation="READ_COMMITTED"/>
    <transaction mode="BATCH" locking="OPTIMISTIC"/>
    <file-store/>
</replicated-cache>
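In case the warning turns out to be tied to the batch transaction mode on this cache, one variant we could test (an untested sketch against the same subsystem, keeping our cache name) would be switching the cache to NON_XA mode, where the cache commits as a Synchronization with the transaction manager instead of enlisting as an XA resource:

```xml
<replicated-cache name="portalCache" mode="SYNC">
    <locking isolation="READ_COMMITTED"/>
    <!-- NON_XA: enlist as a Synchronization rather than an XA resource -->
    <transaction mode="NON_XA" locking="OPTIMISTIC"/>
    <file-store/>
</replicated-cache>
```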
And on the stack side we used:
<stack name="s3">
<transport type="TCP" socket-binding="jgroups-tcp"/>
<protocol type="S3_PING">
<property name="access_key">
my-access-key
</property>
<property name="secret_access_key">
my-secret-access-key
</property>
<property name="location">
my-bucket-name
</property>
</protocol>
<protocol type="MERGE3"/>
<protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
<protocol type="FD"/>
<protocol type="VERIFY_SUSPECT"/>
<protocol type="pbcast.NAKACK2">
<property name="use_mcast_xmit">false</property>
<property name="use_mcast_xmit_req">false</property>
</protocol>
<protocol type="UNICAST3"/>
<protocol type="pbcast.STABLE"/>
<protocol type="pbcast.GMS"/>
<protocol type="MFC"/>
<protocol type="FRAG2"/>
<protocol type="RSVP"/>
</stack>
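As an aside, since this configuration ends up in version control, we are considering pulling the S3 credentials from system properties instead of hard-coding them; WildFly resolves ${...} expressions in these property values (a sketch, and the jgroups.s3.* property names are our own choice, not anything predefined):

```xml
<protocol type="S3_PING">
    <!-- values resolved from -Djgroups.s3.access_key=... etc. at startup -->
    <property name="access_key">${jgroups.s3.access_key}</property>
    <property name="secret_access_key">${jgroups.s3.secret_access_key}</property>
    <property name="location">${jgroups.s3.bucket}</property>
</protocol>
```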
We would like to confirm that this is not an actual problem, and to find out how we can make the message go away (preferably by fixing its root cause).
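If it turns out the warning is harmless, would suppressing it via the logging subsystem be an acceptable interim workaround (a sketch, clearly not a root-cause fix)?

```xml
<!-- raise the level for this category so the recurring ISPN000112 WARN
     entries are hidden; ERROR and above would still be logged -->
<logger category="org.infinispan.transaction.tm.DummyTransaction">
    <level name="ERROR"/>
</logger>
```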