
    Infinispan and JGroups Implementation on the OpenShift Stack

    sidhm12

      Question-1

      --------------

      I am trying to run an Infinispan cluster on OpenShift Tomcat gears, with the nodes sitting on two different hosts. I am using TCP as the transport protocol and MPING as the discovery protocol.

       

      If I use any of the JGroups-provided keywords for the bind address (GLOBAL, SITE_LOCAL, LINK_LOCAL, NON_LOOPBACK, match-interface, match-host, match-address), the service gets bound to the public IP (64.x.x.x); LOOPBACK is the only exception, and it binds to 127.0.0.1. Neither is what I want to achieve.

       

      I want the JGroups service to bind to the custom IP address that OpenShift assigns to the gear, which looks something like 127.2.155.1. If I can bind to that address, it will be easy to write port-forwarding rules so that the cluster members can discover each other even when they live on different hosts.
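
      As a quick sanity check, something like the following confirms whether the gear-assigned address is actually configured on a local network interface; JGroups can only bind to addresses that exist on an interface, and its keyword selectors (SITE_LOCAL, NON_LOOPBACK, ...) only choose among those. This is just a sketch: the class name is a placeholder, and OPENSHIFT_JBOSSEWS_IP is the variable described above.

          import java.net.InetAddress;
          import java.net.NetworkInterface;
          import java.util.Collections;

          public class BindAddressCheck {
              public static void main(String[] args) throws Exception {
                  // The address the gear is supposed to expose, e.g. 127.2.155.1
                  String ip = System.getenv("OPENSHIFT_JBOSSEWS_IP");
                  if (ip == null) {
                      ip = "127.2.155.1";   // example value from this post
                  }
                  InetAddress addr = InetAddress.getByName(ip);

                  // getByInetAddress returns null when no local interface carries this address,
                  // which is exactly the situation in which JGroups cannot bind to it
                  NetworkInterface nic = NetworkInterface.getByInetAddress(addr);
                  System.out.println(ip + (nic == null
                          ? " is NOT assigned to any local interface"
                          : " is assigned to interface " + nic.getName()));

                  // List every address a bind keyword could legitimately resolve to on this gear
                  for (NetworkInterface n : Collections.list(NetworkInterface.getNetworkInterfaces())) {
                      for (InetAddress a : Collections.list(n.getInetAddresses())) {
                          System.out.println(n.getName() + " -> " + a.getHostAddress());
                      }
                  }
              }
          }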

       

      Using an environment property:

       

          // Copy the gear-assigned IP from the environment into a system property,
          // so that ${OPENSHIFT_JBOSSEWS_IP} in jgroups.xml can be resolved
          Map<String, String> envKeys = System.getenv();
          for (String key : envKeys.keySet()) {
              System.out.println(key + ":" + envKeys.get(key));   // dump the environment for debugging
              if (key.equalsIgnoreCase("OPENSHIFT_JBOSSEWS_IP")) {
                  System.setProperty("OPENSHIFT_JBOSSEWS_IP", envKeys.get(key));
              }
          }
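
      A more direct variant of the same lookup (just a sketch, not what is currently deployed) reads the variable once and trims stray whitespace, which can otherwise make the address parsing fail:

          // Illustrative alternative: look the variable up directly and trim it
          String gearIp = System.getenv("OPENSHIFT_JBOSSEWS_IP");
          if (gearIp != null && !gearIp.trim().isEmpty()) {
              System.setProperty("OPENSHIFT_JBOSSEWS_IP", gearIp.trim());
          } else {
              System.err.println("OPENSHIFT_JBOSSEWS_IP is not set");
          }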

       

      It fails while doing the above, saying either that it could not find the IP address or that 127.2.155.1 is an invalid IP address. Below is the sample jgroups.xml I am using in my project.

       

          <config xmlns="urn:org:jgroups"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
          <TCP
              bind_addr="${OPENSHIFT_JBOSSEWS_IP}"
              bind_port="${jgroups.tcp.port:7800}"
              port_range="0"
              recv_buf_size="20m"
              send_buf_size="640k"
              max_bundle_size="31k"
              use_send_queues="true"
              enable_diagnostics="false"
              bundler_type="sender-sends-with-timer"

              thread_naming_pattern="pl"

              thread_pool.enabled="true"
              thread_pool.min_threads="2"
              thread_pool.max_threads="30"
              thread_pool.keep_alive_time="60000"
              thread_pool.queue_enabled="true"
              thread_pool.queue_max_size="100"
              thread_pool.rejection_policy="Discard"

              oob_thread_pool.enabled="true"
              oob_thread_pool.min_threads="2"
              oob_thread_pool.max_threads="30"
              oob_thread_pool.keep_alive_time="60000"
              oob_thread_pool.queue_enabled="false"
              oob_thread_pool.queue_max_size="100"
              oob_thread_pool.rejection_policy="Discard"

              internal_thread_pool.enabled="true"
              internal_thread_pool.min_threads="2"
              internal_thread_pool.max_threads="4"
              internal_thread_pool.keep_alive_time="60000"
              internal_thread_pool.queue_enabled="true"
              internal_thread_pool.queue_max_size="100"
              internal_thread_pool.rejection_policy="Discard"
              />

          <!-- Ergonomics, new in JGroups 2.11, are disabled by default in TCPPING until JGRP-1253 is resolved -->
          <!--
          <TCPPING timeout="3000"
               initial_hosts="localhost[7800],localhost[7801]"
               port_range="5"
               num_initial_members="3"
               ergonomics="false"
           />
          -->

          <MPING bind_addr="${OPENSHIFT_JBOSSEWS_IP}"
              break_on_coord_rsp="true"
              mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
              mcast_port="${jgroups.mping.mcast_port:43376}"
              ip_ttl="${jgroups.udp.ip_ttl:2}"
              num_initial_members="3"/>
          <MERGE3/>

          <FD_SOCK/>
          <FD timeout="3000" max_tries="5"/>
          <VERIFY_SUSPECT timeout="1500"/>

          <pbcast.NAKACK2 use_mcast_xmit="false"
                          xmit_interval="1000"
                          xmit_table_num_rows="100"
                          xmit_table_msgs_per_row="10000"
                          xmit_table_max_compaction_time="10000"
                          max_msg_batch_size="100"/>
          <UNICAST3 xmit_interval="500"
                    xmit_table_num_rows="20"
                    xmit_table_msgs_per_row="10000"
                    xmit_table_max_compaction_time="10000"
                    max_msg_batch_size="100"
                    conn_expiry_timeout="0"/>

          <pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
          <pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
          <tom.TOA/> <!-- TOA is only needed for total-order transactions -->

          <MFC max_credits="2m" min_threshold="0.40"/>
          <FRAG2 frag_size="30k"/>
          <RSVP timeout="60000" resend_interval="500" ack_on_delivery="false"/>
          </config>
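
      For reference, here is a minimal sketch of how this file can be wired into an embedded Infinispan cache manager via the transport's configurationFile property. It assumes the configuration above is on the classpath as jgroups.xml; the cluster name, cache mode, and class name are placeholders, not values from my actual project.

          import org.infinispan.Cache;
          import org.infinispan.configuration.cache.CacheMode;
          import org.infinispan.configuration.cache.ConfigurationBuilder;
          import org.infinispan.configuration.global.GlobalConfigurationBuilder;
          import org.infinispan.manager.DefaultCacheManager;
          import org.infinispan.manager.EmbeddedCacheManager;

          public class ClusterSmokeTest {
              public static void main(String[] args) {
                  // Point Infinispan's JGroups transport at the jgroups.xml shown above
                  GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder();
                  global.transport()
                        .clusterName("openshift-cluster")                 // placeholder cluster name
                        .addProperty("configurationFile", "jgroups.xml");

                  // A simple distributed cache, just to force the channel to start and join
                  ConfigurationBuilder cache = new ConfigurationBuilder();
                  cache.clustering().cacheMode(CacheMode.DIST_SYNC);

                  EmbeddedCacheManager manager = new DefaultCacheManager(global.build(), cache.build());
                  try {
                      Cache<String, String> c = manager.getCache();
                      c.put("probe", "ok");
                      // If bind_addr resolved to the gear IP, the local address and view reflect it
                      System.out.println("Local address: " + manager.getAddress());
                      System.out.println("Cluster view : " + manager.getMembers());
                  } finally {
                      manager.stop();
                  }
              }
          }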

       

      Question-2

      --------------

      When Infinispan starts successfully, lsof shows two TCP listeners owned by the same Java process (PID 5640): one on port 7800, as configured above, and another on a port that appears to be picked at random. I would like to understand more about these two listeners.

       

          COMMAND   PID     USER   FD   TYPE     DEVICE SIZE/OFF NODE NAME
          java     5640     1334   44u  IPv4 1556890779      0t0  TCP 127.2.155.1:7800 (LISTEN)
          java     5640     1334   44u  IPv4 1556890779      0t0  TCP 127.2.155.1:20772 (LISTEN)
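
      One way to correlate the extra listener with the configured stack is to bring up a channel from the same configuration and list its protocols; protocols such as FD_SOCK (present in the stack above) open their own server socket, by default on an ephemeral port, which would account for a second LISTEN entry next to 7800. This is only a sketch: it assumes jgroups.xml is on the classpath, and the cluster name and class name are placeholders.

          import org.jgroups.JChannel;
          import org.jgroups.stack.Protocol;

          public class StackDump {
              public static void main(String[] args) throws Exception {
                  // Start a channel from the same configuration and print the protocol stack,
                  // so each open socket can be matched to the protocol that created it
                  JChannel channel = new JChannel("jgroups.xml");
                  try {
                      channel.connect("openshift-cluster");   // placeholder cluster name
                      for (Protocol p : channel.getProtocolStack().getProtocols()) {
                          System.out.println(p.getName());
                      }
                  } finally {
                      channel.close();
                  }
              }
          }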