19 Replies. Latest reply on Oct 11, 2017 5:35 PM by jbertram

    JMS Clustering with Wildfly-10.1 application

    kpreeta12

      We are going to have JMS configured on a separate remote cluster. The application, which is clustered, runs on a separate Wildfly-10.1 cluster. So the application now has to do a remote lookup to the JMS using the lookup URL.

        • 1. Re: Wildfly 10.1-Final with load-balancing feature
          jbertram

          What exactly are you trying to load-balance with regards to JMS?  Are you just load-balancing the JNDI requests?  Assuming you've configured Artemis to be clustered, I'm not sure why this would be necessary as Artemis already has load-balancing functionality built into the cluster.

          • 2. Re: Wildfly 10.1-Final with load-balancing feature
            kpreeta12

            Hi Justin,

             

            Thanks for responding. Well, I was not aware that Artemis already has load-balancing functionality built into the cluster. I have actually yet to try out JMS clustering with Wildfly 10.1.

             

            Earlier I tried JMS clustering (HornetQ) with Wildfly 8.2; let me attach the design so you can see how I managed to use nginx as the load-balancer.

             

            Please take a look at the attached picture. Basically I used nginx as a global load-balancer, and then there were local load-balancers managing the live and backup servers.

             

            Please let me know if there is a better way I can follow with Wildfly 10.1, which uses ActiveMQ Artemis.

             

            Thanks,

            Preeta

            • 3. Re: Wildfly 10.1-Final with load-balancing feature
              jbertram

              Well, I was not aware that Artemis already has load-balancing functionality built into the cluster. I have actually yet to try out JMS clustering with Wildfly 10.1.

              HornetQ in Wildfly 8.2 also has the same load-balancing functionality.

               

              Please take a look at the attached picture. Basically I used nginx as a global load-balancer, and then there were local load-balancers managing the live and backup servers.

              The picture doesn't make clear what exactly is going through the load-balancer.  Is it just balancing JNDI lookups?  Please clarify.

              • 4. Re: Wildfly 10.1-Final with load-balancing feature
                kpreeta12

                I am pasting below the nginx.conf for your reference:

                events { }

                http {
                    map $http_upgrade $connection_upgrade {
                        default upgrade;
                        ''      close;
                    }

                    include       mime.types;
                    default_type  application/octet-stream;

                    server {
                        listen       8181;
                        server_name  10.76.82.38;  # nginx load-balancer

                        location / {
                            proxy_pass          http://jboss;
                            proxy_http_version  1.1;
                            proxy_set_header    Upgrade $http_upgrade;
                            proxy_set_header    Connection $connection_upgrade;
                            proxy_next_upstream error timeout invalid_header http_500;
                            proxy_connect_timeout 2;
                            proxy_set_header    Host $host;
                            proxy_set_header    X-Real-IP $remote_addr;
                            proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
                        }
                    }

                    upstream jboss {
                        # Sticky sessions
                        ip_hash;

                        server 10.76.82.120:8080;  # wildfly node1 (JMS)
                        server 10.76.82.121:8080;  # wildfly node2 (JMS)
                    }
                }

                 

                 

                Also, here is the domain.xml of the PSC application, which is on a separate VM. This PSC application needs to interact with the JMS cluster using the JMS JNDI URL.

                Below please find the relevant section of domain.xml:

                 

                <socket-binding-group name="ha-sockets" default-interface="public">
                    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
                    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
                    <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
                    <socket-binding name="http" port="${jboss.http.port:8080}"/>
                    <socket-binding name="https" port="${jboss.https.port:8443}"/>
                    <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
                    <socket-binding name="jgroups-tcp" port="7600"/>
                    <socket-binding name="jgroups-tcp-fd" port="57600"/>
                    <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
                    <socket-binding name="jgroups-udp-fd" port="54200"/>
                    <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
                    <socket-binding name="txn-recovery-environment" port="4712"/>
                    <socket-binding name="txn-status-manager" port="4713"/>
                    <outbound-socket-binding name="remote-http">
                        <remote-destination host="10.76.82.38" port="${jboss.http.port:8181}"/>
                    </outbound-socket-binding>
                    <outbound-socket-binding name="mail-smtp">
                        <remote-destination host="localhost" port="25"/>
                    </outbound-socket-binding>
                </socket-binding-group>

                • 5. Re: Wildfly 10.1-Final with load-balancing feature
                  jbertram

                  I'm not really concerned with the nginx.conf.  I just want to know what's going through the load-balancer from an application perspective.  Are you able to answer this question?

                  • 6. Re: Wildfly 10.1-Final with load-balancing feature
                    kpreeta12

                    I added a section of our application's domain.xml in my previous message:

                     

                    <outbound-socket-binding name="remote-http">
                        <remote-destination host="10.76.82.38" port="${jboss.http.port:8181}"/>
                    </outbound-socket-binding>

                     

                     

                    So it's the JMS JNDI URL that connects to the JMS through the load-balancer. I hope I made it clear this time.

                     

                    So, actually, yes, it's just balancing JNDI lookups.

                    • 7. Re: Wildfly 10.1-Final with load-balancing feature
                      jbertram

                      I think you may be confused about how JMS and JNDI are related.  They are 100% independent.  When you look something up in JNDI it could be any kind of resource.  There's no such thing as "JMS JNDI," per se.  The JMS specification sets forth the convention that admin objects (e.g. connection factories and destinations) can be looked up via JNDI.

                       

                      Furthermore, when a HornetQ or Artemis connection factory is looked up via JNDI, the client simply gets back a stub which is used internally to create a connection to the broker.  Depending on the configuration, that connection may or may not use the same host name and port as the JNDI lookup.  Again, the two things are completely independent.
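                      To illustrate how independent the two are, here is a minimal client sketch (assuming the stock jms/RemoteConnectionFactory binding; the credentials are hypothetical):

                      import java.util.Properties;
                      import javax.jms.Connection;
                      import javax.jms.ConnectionFactory;
                      import javax.naming.Context;
                      import javax.naming.InitialContext;

                      public class RemoteLookupExample {
                          public static void main(String[] args) throws Exception {
                              Properties env = new Properties();
                              env.put(Context.INITIAL_CONTEXT_FACTORY,
                                      "org.jboss.naming.remote.client.InitialContextFactory");
                              // This URL is only where the JNDI lookup itself goes
                              env.put(Context.PROVIDER_URL, "http-remoting://10.76.80.29:6080");
                              Context ctx = new InitialContext(env);

                              // The lookup returns a client-side stub, not a live connection
                              ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");

                              // Only this call opens an actual connection to the broker, using the
                              // host/port baked into the stub's connector configuration
                              Connection connection = cf.createConnection("jmsuser", "jmspassword"); // hypothetical credentials
                              connection.close();
                              ctx.close();
                          }
                      }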

                       

                      A couple of points...

                      • As noted previously, both HornetQ and Artemis have load-balancing functionality built into the cluster, so if the only thing you're pushing through your load-balancer is JMS-related JNDI lookups then you can probably just ditch it completely and simplify your architecture.  When clients use a connection factory, the broker they connect to will be determined by the load-balancing policy configured on the connection factory (round-robin by default).
                      • If you're connecting to the JMS broker from another instance of Wildfly then you should leverage the power of Java EE on the client's application server.  For example, if you're consuming messages then you should use an MDB, and if you're sending messages then you should use a local <pooled-connection-factory> (see the sketch after this list).
                      • It's not clear to me that you actually need load-balancing at this point.  A single instance of HornetQ or Artemis can handle thousands of messages per second.  You could potentially simplify your architecture further by having just a single live and backup pair.
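                      For the sending side, here's a minimal sketch of a session bean that uses the local pooled connection factory via its default java:/JmsXA entry (the local queue binding java:/jms/queue/BEEERequisitionsQueue is a hypothetical example):

                      import javax.annotation.Resource;
                      import javax.ejb.Stateless;
                      import javax.jms.Connection;
                      import javax.jms.ConnectionFactory;
                      import javax.jms.MessageProducer;
                      import javax.jms.Queue;
                      import javax.jms.Session;

                      @Stateless
                      public class RequisitionSender {
                          // The local pooled connection factory; no remote JNDI lookup needed
                          @Resource(lookup = "java:/JmsXA")
                          private ConnectionFactory connectionFactory;

                          // Hypothetical local binding for the destination
                          @Resource(lookup = "java:/jms/queue/BEEERequisitionsQueue")
                          private Queue queue;

                          public void send(String text) throws Exception {
                              // JMS 2.0 resources are AutoCloseable, so try-with-resources works here
                              try (Connection connection = connectionFactory.createConnection();
                                   Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
                                  MessageProducer producer = session.createProducer(queue);
                                  producer.send(session.createTextMessage(text));
                              }
                          }
                      }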
                      • 8. Re: Wildfly 10.1-Final with load-balancing feature
                        kpreeta12

                        Hi Justin,

                         

                        I am back with JMS clustering now. We have upgraded our application to use a Wildfly 10.1 cluster and things are good. Now I need to start on JMS clustering.

                         

                        My question is this. You said I should simplify the architecture by having just a single live and backup pair. This sounds good, but I have a question.

                         

                        Let's say our application, running on the Wildfly 10.1 cluster, is doing the JMS client lookup remotely. The JMS queues are configured on another 2-node Wildfly-10.1 cluster that corresponds to a single live and backup pair.  I was intending to push through the load-balancer just the JMS-related JNDI lookups from the application cluster to the remotely configured JMS (which has the live and backup pair).

                         

                        Now my question is: can we avoid having an external load-balancer (nginx), since you said Artemis has load-balancing functionality built into it? If so, how do I construct the lookup URL?

                         

                        BEEERequisitions.JndiProviderUrl=http-remoting://10.76.80.29:6080. Basically my question is: what should the IP address in the lookup URL be? Should it be the live server's IP address or the backup server's IP address? If there were a load-balancer (nginx), I could very well use the IP address of the load-balancer. I hope my question makes sense.

                         

                         

                        Thanks,

                        Preeta

                        • 9. Re: JMS Clustering with Wildfly-10.1 application
                          jbertram

                          Let's say our application, running on the Wildfly 10.1 cluster, is doing the JMS client lookup remotely.

                          Your question assumes that your application should be doing a remote JNDI lookup for JMS resources.  However, I would encourage you to use managed resources (i.e. one of the great benefits of using an application server in the first place) so that you don't have to do remote JNDI lookups.  For example, if you're consuming messages you should be using an MDB whose activation configuration simply points to the remote live/backup pair.  There's no need for JNDI lookups in this scenario.  Or if you're producing messages then you should configure a pooled-connection-factory to point to the remote live/backup pair and look that up locally rather than remotely.  You'll get the speed of a local lookup, the performance benefits of a connection pool, simplified development (since you won't have to spend time managing the connections), and simplified management (since the configuration and management are facilitated by XML and the JBoss CLI).  If you do happen to need to look up a specific destination hosted on the remote live/backup pair, you can configure the same one on the local machine and look that up, or just instantiate it directly in your code rather than doing a JNDI lookup.
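                          For the consuming case, a rough sketch of such an MDB could look like this. It assumes a pooled-connection-factory named "remote-artemis" has been defined in the local messaging-activemq subsystem with connectors pointing at the remote live/backup pair; that name and the destination are illustrative, not prescribed:

                          import javax.ejb.ActivationConfigProperty;
                          import javax.ejb.MessageDriven;
                          import javax.jms.Message;
                          import javax.jms.MessageListener;
                          import org.jboss.ejb3.annotation.ResourceAdapter;

                          @MessageDriven(activationConfig = {
                              @ActivationConfigProperty(propertyName = "destinationType",
                                                        propertyValue = "javax.jms.Queue"),
                              @ActivationConfigProperty(propertyName = "destination",
                                                        propertyValue = "jms/BEEERequisitionsQueue")
                          })
                          // Points the MDB at the pooled-connection-factory that targets the remote pair
                          // instead of the default in-vm broker ("remote-artemis" is a hypothetical name)
                          @ResourceAdapter("remote-artemis")
                          public class RequisitionsListener implements MessageListener {
                              @Override
                              public void onMessage(Message message) {
                                  // process the message delivered from the remote live/backup pair
                              }
                          }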

                           

                          I'll answer the rest of your questions assuming that there's some kind of situation where you are absolutely forced to do a remote JNDI lookup...

                           

                          Now my question is: can we avoid having an external load-balancer (nginx), since you said Artemis has load-balancing functionality built into it?

                          Yes, you can definitely avoid having an external load-balancer.

                           

                           

                          If so, how do I construct the lookup URL?

                          You'd put both the live and the backup details into the URL.

                           

                           

                          Basically my question is: what should the IP address in the lookup URL be? Should it be the live server's IP address or the backup server's IP address?

                          You should use something like this:

                           

                          BEEERequisitions.JndiProviderUrl=http-remoting://10.76.80.29:6080,http-remoting://10.76.80.30:6080

                           

                          In this configuration, if the connection to the live (i.e. 10.76.80.29:6080) fails, then it will attempt to connect to the backup (i.e. 10.76.80.30:6080).
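                          In client code, that property would feed the JNDI environment roughly like this (a sketch; the failover between the two listed hosts happens inside the naming client):

                          import java.util.Properties;
                          import javax.naming.Context;
                          import javax.naming.InitialContext;

                          public class FailoverLookupExample {
                              public static void main(String[] args) throws Exception {
                                  Properties env = new Properties();
                                  env.put(Context.INITIAL_CONTEXT_FACTORY,
                                          "org.jboss.naming.remote.client.InitialContextFactory");
                                  // Both hosts listed: if the live at 10.76.80.29 is unreachable,
                                  // the naming client falls back to the backup at 10.76.80.30
                                  env.put(Context.PROVIDER_URL,
                                          "http-remoting://10.76.80.29:6080,http-remoting://10.76.80.30:6080");
                                  Context ctx = new InitialContext(env);
                                  ctx.close();
                              }
                          }

                          Note this covers the JNDI lookup itself; failover of the JMS connection made afterwards is governed by the connection factory's own settings (e.g. ha="true" and reconnect-attempts="-1" on the RemoteConnectionFactory).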

                          • 10. Re: JMS Clustering with Wildfly-10.1 application
                            kpreeta12

                            Thanks a lot for the response and the elaborate explanation.

                             

                            I need to try out getting rid of the remote JMS lookup once I get the current setup working with the JMS cluster.

                             

                            In the domain.xml, we have the <remote-destination> element shown below, which corresponds to the JMS lookup URL and port. So probably we should have 2 entries instead of one, as shown below, for a JMS cluster?

                             

                            <socket-binding-group name="ha-sockets" default-interface="public">
                                <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
                                <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
                                <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
                                <socket-binding name="http" port="${jboss.http.port:8080}"/>
                                <socket-binding name="https" port="${jboss.https.port:8443}"/>
                                <socket-binding name="jgroups-mping" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
                                <socket-binding name="jgroups-tcp" port="7600"/>
                                <socket-binding name="jgroups-tcp-fd" port="57600"/>
                                <socket-binding name="jgroups-udp" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
                                <socket-binding name="jgroups-udp-fd" port="54200"/>
                                <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
                                <socket-binding name="txn-recovery-environment" port="4712"/>
                                <socket-binding name="txn-status-manager" port="4713"/>
                                <outbound-socket-binding name="remote-http">
                                    <remote-destination host="10.76.80.29" port="${jboss.http.port:6080}"/>
                                    <remote-destination host="10.76.80.30" port="${jboss.http.port:6080}"/>
                                </outbound-socket-binding>
                                <outbound-socket-binding name="mail-smtp">
                                    <remote-destination host="localhost" port="25"/>
                                </outbound-socket-binding>
                            </socket-binding-group>

                            • 11. Re: JMS Clustering with Wildfly-10.1 application
                              jbertram

                              According to the configuration schema it's not even valid to have multiple remote-destination elements for an outbound-socket-binding.  Can you clarify exactly what you're trying to configure here?

                              • 12. Re: JMS Clustering with Wildfly-10.1 application
                                kpreeta12

                                Hi Justin,

                                 

                                OK, if that's not valid, then:

                                Here is the messaging subsystem from the domain.xml of JBoss EAP (we now have both EAP and Wildfly 10 options). Assuming we want a JMS cluster with 2 nodes, and each node uses this messaging subsystem under the same profile, then we probably need 2 http-connectors, one for each node, as shown in the two http-connector lines below.

                                 

                                <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
                                    <server name="default">
                                        <security-setting name="#">
                                            <role name="jmsrole" delete-non-durable-queue="true" create-non-durable-queue="true" consume="true" send="true"/>
                                        </security-setting>
                                        <address-setting name="#" message-counter-history-day-limit="10" page-size-bytes="2097152" max-size-bytes="10485760" expiry-address="jms.queue.ExpiryQueue" dead-letter-address="jms.queue.DLQ"/>
                                        <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="my-http1"/>
                                        <http-connector name="http-connector" endpoint="http-acceptor" socket-binding="my-http2"/>
                                        <http-connector name="http-connector-throughput" endpoint="http-acceptor-throughput" socket-binding="http">
                                            <param name="batch-delay" value="50"/>
                                        </http-connector>
                                        <in-vm-connector name="in-vm" server-id="0"/>
                                        <http-acceptor name="http-acceptor" http-listener="default"/>
                                        <http-acceptor name="http-acceptor-throughput" http-listener="default">
                                            <param name="batch-delay" value="50"/>
                                            <param name="direct-deliver" value="false"/>
                                        </http-acceptor>
                                        <in-vm-acceptor name="in-vm" server-id="0"/>
                                        <jms-queue name="ExpiryQueue" entries="/ExpiryQueue jboss/exported/jms/queue/ExpiryQueue"/>
                                        <jms-queue name="DLQ" entries="/DLQ jboss/exported/jms/queue/DLQ"/>
                                        <jms-queue name="testQueue" entries="queue/test jboss/exported/jms/queue/test"/>
                                        <jms-queue name="ISEEOutboundQueue" entries="jms/ISEEOutboundQueue jboss/exported/jms/queue/ISEEOutboundQueue"/>
                                        <jms-queue name="ISEEInboundQueue" entries="jms/ISEEInboundQueue jboss/exported/jms/queue/ISEEInboundQueue"/>
                                        <jms-queue name="BEEEAuthorizationsQueue" entries="jms/BEEEAuthorizationsQueue jboss/exported/jms/queue/BEEEAuthorizationsQueue"/>
                                        <jms-queue name="BEEERequisitionsQueue" entries="jms/BEEERequisitionsQueue jboss/exported/jms/queue/BEEERequisitionsQueue"/>
                                        <jms-queue name="BEEEInboundQueue" entries="jms/BEEEInboundQueue jboss/exported/jms/queue/BEEEInboundQueue"/>
                                        <connection-factory name="InVmConnectionFactory" entries="/ConnectionFactory" connectors="in-vm"/>
                                        <connection-factory name="RemoteConnectionFactory" entries="jms/RemoteConnectionFactory jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector"/>
                                        <pooled-connection-factory name="activemq-ra" transaction="xa" entries="/JmsXA jboss/DefaultJMSConnectionFactory" connectors="in-vm"/>
                                    </server>
                                </subsystem>

                                 

                                 

                                <socket-binding-group name="ha-sockets" default-interface="public">
                                    <socket-binding name="ajp" port="8009"/>
                                    <socket-binding name="http" port="8080"/>
                                    <socket-binding name="https" port="8443"/>
                                    <socket-binding name="jgroups-mping" interface="private" port="0" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45700"/>
                                    <socket-binding name="jgroups-tcp" interface="private" port="7600"/>
                                    <socket-binding name="jgroups-tcp-fd" interface="private" port="57600"/>
                                    <socket-binding name="jgroups-udp" interface="private" port="55200" multicast-address="${jboss.default.multicast.address:230.0.0.4}" multicast-port="45688"/>
                                    <socket-binding name="jgroups-udp-fd" interface="private" port="54200"/>
                                    <socket-binding name="modcluster" port="0" multicast-address="224.0.1.105" multicast-port="23364"/>
                                    <socket-binding name="txn-recovery-environment" port="4712"/>
                                    <socket-binding name="txn-status-manager" port="4713"/>
                                    <socket-binding name="management-http" port="9990"/>
                                    <socket-binding name="management-https" port="9993"/>
                                    <socket-binding name="iiop" port="4528"/>
                                    <socket-binding name="iiop-ssl" port="4529"/>
                                    <outbound-socket-binding name="mail-smtp">
                                        <remote-destination host="localhost" port="25"/>
                                    </outbound-socket-binding>
                                    <outbound-socket-binding name="my-http1">
                                        <remote-destination host="10.76.80.29" port="${jboss.http.port:6080}"/>
                                    </outbound-socket-binding>
                                    <outbound-socket-binding name="my-http2">
                                        <remote-destination host="10.76.80.30" port="${jboss.http.port:6080}"/>
                                    </outbound-socket-binding>
                                </socket-binding-group>

                                 

                                 

                                 

                                Let me know if this configuration is right.

                                 

                                Thanks,

                                Preeta

                                • 13. Re: JMS Clustering with Wildfly-10.1 application
                                  jbertram

                                  I'm not clear on which server this configuration is for.  Is this for the cluster nodes themselves or for the server hosting the application connecting to the cluster?  If the former then you're missing some configuration details to actually make the cluster work.  I'd recommend you look at the messaging configuration in standalone-full-ha.xml for details on what configuration is required to make a cluster function.

                                   

                                  In any event, you can't have 2 connectors with the same name (e.g. "http-connector").  Each connector has to have a unique name.

                                   

                                  Lastly, this really isn't related to how JNDI lookups work.  It's related to what a JNDI lookup might return, but not how the lookup would actually make a connection.

                                  • 14. Re: JMS Clustering with Wildfly-10.1 application
                                    kpreeta12

                                    Hi Justin,

                                     

                                    The lines below are the ones that differ in standalone-full-ha.xml. The other lines in the messaging subsystem are the same as in standalone-full.xml.

                                     

                                     

                                    <subsystem xmlns="urn:jboss:domain:messaging-activemq:1.0">
                                        <server name="default">
                                            <cluster password="${jboss.messaging.cluster.password:CHANGE ME!!}"/>
                                            .........
                                            <broadcast-group name="bg-group1" jgroups-channel="activemq-cluster" connectors="http-connector"/>
                                            <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>
                                            <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
                                            <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
                                            <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
                                            <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
                                            <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
                                        </server>
                                    </subsystem>

                                     

                                     

                                    Now to answer your question: all this configuration is for the cluster that has only the JMS queues configured, not the application. The application that remotely connects to this cluster would need the configuration below, in addition to the lookup URL being JndiProviderUrl=http-remoting://10.76.80.29:6080,http-remoting://10.76.80.30:6080

                                     

                                    <http-connector name="http-connector1" endpoint="http-acceptor" socket-binding="my-http1"/>
                                    <outbound-socket-binding name="my-http1">
                                        <remote-destination host="10.76.80.29" port="${jboss.http.port:6080}"/>
                                    </outbound-socket-binding>

                                    <http-connector name="http-connector2" endpoint="http-acceptor" socket-binding="my-http2"/>
                                    <outbound-socket-binding name="my-http2">
                                        <remote-destination host="10.76.80.30" port="${jboss.http.port:6080}"/>
                                    </outbound-socket-binding>

                                     

                                     

                                     

                                    Thanks,

                                    Preeta
