8 Replies Latest reply on Feb 20, 2014 5:45 AM by infinispan user

    Creating a single CacheManager in distribution mode

    infinispan user Newbie

      Can anyone help me with how to create a single cache manager over multiple machines in distribution mode?

        • 1. Re: Creating a single CacheManager in distribution mode
          Pedro Ruivo Novice

          Hi infinispan jboss,

           

          I advise you to take a look at the quick start: QuickStart

          Also, it may be useful to take a look at the user guide: UserGuide

           

          If you have a more precise question, let me know.

           

          Cheers,

          Pedro
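
          To illustrate the quick start pattern: there is no single CacheManager object spanning machines. Each node starts its own CacheManager from the same clustered configuration, and the managers discover each other over JGroups and join one cluster. A minimal sketch (the file name infinispan-distribution.xml matches the config posted later in this thread; any clustered configuration works the same way):

```java
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class ClusterDemo {
    public static void main(String[] args) throws Exception {
        // Every node runs this same code; the CacheManagers find each
        // other through the JGroups transport named in the XML file.
        DefaultCacheManager manager =
                new DefaultCacheManager("infinispan-distribution.xml");
        try {
            Cache<String, String> cache = manager.getCache();
            cache.put("key", "value");               // stored on numOwners nodes
            System.out.println(manager.getMembers()); // current cluster view
        } finally {
            manager.stop();
        }
    }
}
```

          Run the same program on every node; once the cluster forms, getMembers() lists all nodes in the view.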

          • 2. Re: Creating a single CacheManager in distribution mode
            infinispan user Newbie

            Hi Pedro,

                 Thanks for responding to this.

                 When I use Infinispan in distribution mode with numOwners=1, data is not shared across multiple nodes; when I use numOwners=2, data is shared. What might be the problem?

            Thanks,

            • 3. Re: Creating a single CacheManager in distribution mode
              Tristan Tarrant Master

              That's expected: numOwners=1 means there is only one owner for each element, therefore no redundancy.
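
              For comparison, a sketch of the same clustering block with redundancy (this follows the 6.0 schema used elsewhere in this thread; the values are illustrative):

```xml
<default>
   <clustering mode="distribution">
      <sync/>
      <!-- each entry is stored on two cluster nodes,
           so one node can fail without data loss -->
      <hash numOwners="2"/>
   </clustering>
</default>
```

              Note that even with numOwners=1, a read on a non-owner node normally fetches the value remotely from the owner; a null result usually points at a clustering problem rather than at numOwners itself.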

              • 4. Re: Creating a single CacheManager in distribution mode
                infinispan user Newbie

                Hi Tristan,

                  If I use numOwners=1, the data is stored on the first node; that's fine. But when I try to access the data from the second node, I cannot: the value is null. Can you provide any help on this?

                • 5. Re: Creating a single CacheManager in distribution mode
                  Tristan Tarrant Master

                  Does the cluster form? Is state transfer enabled? Can we see your configuration (both Infinispan and JGroups) and possibly a debug log of the startup of the application?

                  • 6. Re: Creating a single CacheManager in distribution mode
                    infinispan user Newbie

                    Yes, the cluster forms and state transfer is also enabled. These are my config files:

                     

                     

                    jgroups-tcp.xml

                    ==================

                    <config xmlns="urn:org:jgroups"

                            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                            xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">

                        <TCP bind_addr="192.168.1.1"

                             bind_port="7800" port_range="10"

                             recv_buf_size="20000000"

                             send_buf_size="640000"

                             loopback="false"

                             max_bundle_size="64k"

                             bundler_type="old"

                             enable_diagnostics="true"

                             thread_naming_pattern="cl"

                     

                             timer_type="new"

                             timer.min_threads="4"

                             timer.max_threads="10"

                             timer.keep_alive_time="3000"

                             timer.queue_max_size="1000"

                             timer.wheel_size="200"

                             timer.tick_time="50"

                     

                             thread_pool.enabled="true"

                             thread_pool.min_threads="2"

                             thread_pool.max_threads="10"

                             thread_pool.keep_alive_time="5000"

                             thread_pool.queue_enabled="true"

                             thread_pool.queue_max_size="100000"

                             thread_pool.rejection_policy="discard"

                     

                             oob_thread_pool.enabled="true"

                             oob_thread_pool.min_threads="2"

                             oob_thread_pool.max_threads="10"

                             oob_thread_pool.keep_alive_time="5000"

                             oob_thread_pool.queue_enabled="false"

                             oob_thread_pool.queue_max_size="100"

                             oob_thread_pool.rejection_policy="discard"/>

                     

                    <!--   <MPING bind_addr="${jgroups.bind_addr:131.10.20.16}" break_on_coord_rsp="true"

                              mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"

                              mcast_port="${jgroups.mping.mcast_port:43366}"

                              ip_ttl="${jgroups.udp.ip_ttl:2}"

                              num_initial_members="3" timeout="2000"/> -->

                     

                            <TCPPING initial_hosts="192.168.2.2[7800],192.168.1.1[7800]" port_range="2"

                             timeout="3000" num_initial_members="1" />

                        <MERGE2 max_interval="30000"

                                min_interval="10000"/>

                     

                        <FD_SOCK/>

                        <FD_ALL interval="2000" timeout="5000" />

                        <VERIFY_SUSPECT timeout="500"  />

                        <BARRIER />

                        <pbcast.NAKACK use_mcast_xmit="false"

                                       retransmit_timeout="100,300,600,1200"

                                       discard_delivered_msgs="true" />

                        <UNICAST3 conn_expiry_timeout="0"/>

                     

                        <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"

                                       max_bytes="10m"/>

                        <pbcast.GMS print_local_addr="true" join_timeout="5000"

                                    max_bundling_time="30"

                                    view_bundling="true"/>

                        <UFC max_credits="2M"

                             min_threshold="0.4"/>

                        <MFC max_credits="2M"

                             min_threshold="0.4"/>

                        <FRAG2 frag_size="60000"  />

                        <pbcast.STATE_TRANSFER  />

                    </config>

                     

                    infinispan-distribution.xml

                    ============

                    <infinispan

                            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                            xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"

                            xmlns="urn:infinispan:config:6.0">

                     

                       <global>

                         <transport>

                        <properties>

                       <property name="configurationFile" value="jgroups.xml"/>

                        </properties>

                         </transport>

                       </global>

                         <default>

                          <clustering mode="distribution">

                             <sync/>

                              <hash numOwners="1" />

                          </clustering>

                       </default>

                    </infinispan>

                    I am using the same configuration on both nodes (192.168.2.2, 192.168.1.1).

                    Please help me: is there any configuration problem?

                    • 7. Re: Creating a single CacheManager in distribution mode
                      Tristan Tarrant Master

                      Why is the TCP bind_addr="131.10.20.16" instead of 192.168.x.y?

                      • 8. Re: Creating a single CacheManager in distribution mode
                        infinispan user Newbie

                        I forgot to change that; it is also 192.168.1.1.

                         

                        If I use numOwners=1 on both nodes, and I add some data to the cache on the first node, why is it added to the second node?