19 Replies — Latest reply on Jun 14, 2014 6:59 AM by rituraj
      • 15. Re: Infinispan error as "org.infinispan.commons.CacheException"
        pferraro

        I should also add that ultimate redundancy can be achieved by switching the web sessions cache to a <replicated-cache/> in which a given web session is stored on *every* node, no matter the size of the cluster.  You can think of this as a <distributed-cache/> where owners="N", where N is the variable size of the cluster.
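For reference, a minimal sketch of what that change looks like in the infinispan subsystem (the cache-container and module names follow the standalone-ha.xml defaults; adjust to your own profile):

```xml
<cache-container name="web" default-cache="repl" module="org.wildfly.clustering.web.infinispan">
    <transport lock-timeout="60000"/>
    <!-- replicated-cache: every node in the cluster holds a full copy of every session -->
    <replicated-cache name="repl" mode="ASYNC" batching="true">
        <file-store/>
    </replicated-cache>
</cache-container>
```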

        • 16. Re: Infinispan error as "org.infinispan.commons.CacheException"
          pferraro

          Also, I strongly discourage the use of start="EAGER".  This is only meant for use cases where there is no other mechanism to control the lifecycle of the cache (e.g. a cache used for remote access only).  For web applications, the lifecycle of the cache is bound to the lifecycle of the deployment.  This makes sense because the contents of the cache are only readable by the deployment itself.
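In other words, just omit the start attribute entirely; a sketch of the two variants:

```xml
<!-- Discouraged: start="EAGER" forces the cache to start with the server -->
<replicated-cache name="repl" mode="ASYNC" start="EAGER"/>

<!-- Preferred: no start attribute (defaults to LAZY), so the cache starts
     and stops together with the deployment that uses it -->
<replicated-cache name="repl" mode="ASYNC"/>
```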

          • 17. Re: Infinispan error as "org.infinispan.commons.CacheException"
            rituraj

Sure Paul, we will make that change (removing start="EAGER") as well. Let me briefly summarize what I understood from your comments:

             

1) replicated-cache == gives 100% availability but can reduce performance due to redundancy

2) distributed-cache == can act as replicated as well (i.e. a replicated cache can be considered a special case of a distributed cache)

distributed-cache doesn't replicate to all the members in the cluster... an entry will be replicated only to ${primary_mem+owner-1} nodes... so we need to calculate how many owners we want in a cluster to safely meet availability as well as good performance

3) I have 4 cluster groups:

cluster1 -- 12 nodes

cluster2 -- 3 nodes

cluster3 -- 4 nodes

cluster4 -- 5 nodes

all running the ha profile. Can we use owners=${number_of_cluster_mem - 1} in place of a fixed value such as 2 or 4, as I want to assign different values per cluster?
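We were thinking of something like the following per-cluster override (assuming the owners attribute accepts a property expression with a default, which we have not verified):

```xml
<!-- hypothetical: resolve owners from a system property, defaulting to 2 -->
<distributed-cache name="dist" mode="ASYNC" owners="${session.cache.owners:2}" batching="true">
    <file-store/>
</distributed-cache>
```

and then start each cluster group with e.g. -Dsession.cache.owners=4.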

             

Also, we have just moved to WildFly 8.1.0 and I am getting 2 WARNs which were not there in 8.0.0. Can you tell us what we are missing here?

             

            WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 30) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.

            WARN  [org.wildfly.extension.mod_cluster] (ServerService Thread Pool -- 37) JBAS011706: Metric of type 'mem' is no longer supported and will be ignored

             

             

            Thanks a lot Paul for all your help !!

            • 18. Re: Infinispan error as "org.infinispan.commons.CacheException"
              pferraro

1) replicated-cache == gives 100% availability but can reduce performance due to redundancy

Technically, only REPL_SYNC gives you full availability.  Using a replicated cache with the UDP-based stack also has the advantage of being able to use a single multicast instead of multiple unicasts.
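So if full availability is the goal, the cache would look something like this (sketch):

```xml
<!-- SYNC: a cache write does not complete until replication
     to the other cluster members has been acknowledged -->
<replicated-cache name="repl" mode="SYNC" batching="true">
    <file-store/>
</replicated-cache>
```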

2) distributed-cache == can act as replicated as well (i.e. a replicated cache can be considered a special case of a distributed cache)

distributed-cache doesn't replicate to all the members in the cluster... an entry will be replicated only to ${primary_mem+owner-1} nodes... so we need to calculate how many owners we want in a cluster to safely meet availability as well as good performance

              Effectively, although with replicated-cache the number of owners is equal to the cluster size, whereas with distributed-cache the number of owners is a fixed value.  I guess you can think of it as ${primary_mem+owner-1}, although, more simply, a given cache entry is only stored on N nodes (where owners=N), where the node that created the cache entry will be one of the initial N owners.  And remember, ownership can change whenever the cluster topology changes.
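As a sketch, with owners="3" a given session lives on exactly 3 nodes, regardless of whether the cluster has 4 or 12 members:

```xml
<!-- each entry is stored on the originating node plus 2 backup owners -->
<distributed-cache name="dist" mode="ASYNC" owners="3" batching="true">
    <file-store/>
</distributed-cache>
```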

              WARN  [org.jboss.as.txn] (ServerService Thread Pool -- 30) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.

              AIUI, the node identifier in question is the one defined in the transaction subsystem and is used (I think) when creating transaction identifiers.  This has nothing to do with clustering.  I think it is safe to set this to the node name, i.e. ${jboss.node.name}
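A sketch of that change in the transactions subsystem (namespace version as in WildFly 8.x; verify against your own profile):

```xml
<subsystem xmlns="urn:jboss:domain:transactions:2.0">
    <!-- node-identifier should be unique per server in the cluster -->
    <core-environment node-identifier="${jboss.node.name}">
        <process-id>
            <uuid/>
        </process-id>
    </core-environment>
    <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
</subsystem>
```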

              WARN  [org.wildfly.extension.mod_cluster] (ServerService Thread Pool -- 37) JBAS011706: Metric of type 'mem' is no longer supported and will be ignored

              We removed support for this load metric from mod_cluster for a couple reasons:

              1. Available system memory is not that great a measure of the load of a node
              2. Determining the amount of free memory on a system via the JVM was shown to be unreliable/inaccurate on certain operating systems.

              Instead, I recommend using something like "cpu" or "sessions" or "busyness".
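A sketch of a mod_cluster configuration using one of those metrics (namespace matches the 1.2 schema; other attributes follow the default ha profile):

```xml
<subsystem xmlns="urn:jboss:domain:modcluster:1.2">
    <mod-cluster-config advertise-socket="modcluster" connector="ajp">
        <dynamic-load-provider>
            <!-- "busyness" = proportion of the connector's thread pool in use -->
            <load-metric type="busyness"/>
        </dynamic-load-provider>
    </mod-cluster-config>
</subsystem>
```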

              More about mod_cluster load metrics here:

              Chapter 10. Server-Side Load Metrics

              The mod_cluster docs were written for AS6, so ignore the out of date configuration examples.  The names of the available load metrics in WildFly are enumerated here:

              wildfly/build/src/main/resources/docs/schema/jboss-as-mod-cluster_1_2.xsd at 8.1.0.Final · wildfly/wildfly · GitHub

              • 19. Re: Infinispan error as "org.infinispan.commons.CacheException"
                rituraj

Thanks Paul for clarifying so many details regarding Infinispan, and for your quick turnaround. We have understood a lot of things and are going to apply this configuration in WF-8.1.0.

                 

                Thanks again!!!

                -Rituraj
