8 Replies Latest reply on Dec 9, 2015 11:08 PM by jbertram

    Limit HornetQ threads

    sumkale

      Hi,

      Creating separate discussion thread.

      We recently observed high CPU utilization, and analysis of a thread dump found around 1800 live HornetQ threads. I'm not sure whether this is expected behavior.

      Are the HornetQ threads contributing to the high CPU usage?

       

      Here are more details:

      Server: JBoss AS 7.2.0.Final

      HornetQ version: 2.3.0.CR1

      JDK version: 1.7u67

      Summary of the thread dump analysis:

      • GC threads are consuming most of the CPU
      • Almost all heap generations (Eden, young, and PermGen) are full
      • Around 1800 HornetQ threads are in a parked (WAITING) state
      • The hornetq-failure-check-thread is in a BLOCKED state

       

      Here is the HornetQ thread distribution from the thread dump:

      [attachment: Capture.JPG]

       

      The server configuration is:

      <hornetq-server>

                      <persistence-enabled>true</persistence-enabled>

                      <security-enabled>false</security-enabled>

                      <journal-file-size>102400</journal-file-size>

                      <journal-min-files>2</journal-min-files>

                      <connectors>

                          <netty-connector name="netty" socket-binding="messaging"/>

                          <netty-connector name="netty-throughput" socket-binding="messaging-throughput">

                              <param key="batch-delay" value="50"/>

                          </netty-connector>

                          <in-vm-connector name="in-vm" server-id="0"/>

                      </connectors>

                      <acceptors>

                          <netty-acceptor name="netty" socket-binding="messaging"/>

                          <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">

                              <param key="batch-delay" value="50"/>

                              <param key="direct-deliver" value="false"/>

                          </netty-acceptor>

                          <in-vm-acceptor name="in-vm" server-id="0"/>

                      </acceptors>

                      <security-settings>

                          <security-setting match="#">

                              <permission type="send" roles="guest"/>

                              <permission type="consume" roles="guest"/>

                              <permission type="createNonDurableQueue" roles="guest"/>

                              <permission type="deleteNonDurableQueue" roles="guest"/>

                          </security-setting>

                      </security-settings>

                      <address-settings>

                          <address-setting match="#">

                              <dead-letter-address>jms.queue.DLQ</dead-letter-address>

                              <expiry-address>jms.queue.ExpiryQueue</expiry-address>

                              <redelivery-delay>0</redelivery-delay>

                              <max-size-bytes>10485760</max-size-bytes>

                              <address-full-policy>BLOCK</address-full-policy>

                              <message-counter-history-day-limit>10</message-counter-history-day-limit>

                          </address-setting>

                      </address-settings>

                      <jms-connection-factories>

                          <connection-factory name="InVmConnectionFactory">

                              <connectors>

                                  <connector-ref connector-name="in-vm"/>

                              </connectors>

                              <entries>

                                  <entry name="java:/ConnectionFactory"/>

                              </entries>

                              <connection-ttl>-1</connection-ttl>

                          </connection-factory>

                          <connection-factory name="RemoteConnectionFactory">

                              <connectors>

                                  <connector-ref connector-name="netty"/>

                              </connectors>

                              <entries>

                                  <entry name="java:/RemoteConnectionFactory"/>

                                  <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>

                              </entries>

                          </connection-factory>

                          <pooled-connection-factory name="hornetq-ra">

                              <transaction mode="local"/>

                              <connectors>

                                  <connector-ref connector-name="in-vm"/>

                              </connectors>

                              <entries>

                                  <entry name="java:/JmsXA"/>

                              </entries>

                          </pooled-connection-factory>

                      </jms-connection-factories>

      </hornetq-server>

        • 1. Re: Limit HornetQ threads
          jbertram

          How are you using JMS connections within the applications running on this server?  Are you making sure to use a pooled-connection-factory whenever you send a message?

          • 2. Re: Limit HornetQ threads
            sumkale

            Yes, it's on the same machine. We are using HornetQ as an in-memory queue.

            As per the code, the producers are using the "InVmConnectionFactory" connection factory, while the MDBs (i.e., the consumers) are using the pooled connection factory.

             

            Also, we periodically see the following warning in the logs:

            WARN [org.hornetq.core.client] (hornetq-failure-check-thread) HQ212107: Connection failure has been detected: HQ119034: Did not receive data from invm:0. It is likely the client has exited or crashed without closing its connection, or the network between the server and client has failed. You also might have configured connection-ttl and client-failure-check-period incorrectly. Please check user manual for more information. The connection will now be closed. [code=CONNECTION_TIMEDOUT]

            • 3. Re: Limit HornetQ threads
              jbertram

              As per the code, the producers are using the "InVmConnectionFactory" connection factory, while the MDBs (i.e., the consumers) are using the pooled connection factory.

              The "InVmConnectionFactory" (which is looked up using "java:/ConnectionFactory") is just a normal connection factory; it is not pooled.  Your producers should be looking up "java:/JmsXA" so that they use the "hornetq-ra" pooled-connection-factory.
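
              A producer inside the application server might borrow a connection from the pool like this (a minimal sketch; the bean and queue names are illustrative assumptions, while the "java:/JmsXA" binding comes from the configuration above; JMS 1.1, as shipped with AS 7.2, has no try-with-resources on Connection, so the finally block is the idiomatic way to guarantee the connection goes back to the pool):

```java
import javax.annotation.Resource;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class OrderSender {  // illustrative bean name

    // Injects the "hornetq-ra" pooled-connection-factory
    @Resource(mappedName = "java:/JmsXA")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "java:/queue/testQueue")  // illustrative queue name
    private Queue queue;

    public void send(String text) throws JMSException {
        Connection connection = null;
        try {
            connection = connectionFactory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(text));
        } finally {
            // With a pooled-connection-factory, close() normally returns the
            // physical connection to the pool instead of destroying it.
            if (connection != null) {
                connection.close();
            }
        }
    }
}
```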

               

              You should also ensure that all your clients are closing connections properly when they no longer need them.

              • 4. Re: Limit HornetQ threads
                sumkale

                Thanks much for the quick reply, Justin.

                However, the HornetQ documentation states:

                "Please note that JMS connections, sessions, producers and consumers are designed to be re-used.

                It's an anti-pattern to create new connections, sessions, producers and consumers for each message you produce or consume. If you do this, your application will perform very poorly."

                 

                I'm confused ...

                • 5. Re: Limit HornetQ threads
                  jbertram

                   The HornetQ documentation is correct.  That is why you should be using a pooled-connection-factory like the one available at "java:/JmsXA", so that the connections (i.e. the "heaviest" JMS objects) are re-used.  When you "open" and "close" a connection from a pooled-connection-factory, in most cases the physical connection is not actually opened or closed.  You are typically just getting a connection from the pool and returning that connection back to the pool.  My hunch is that you're not handling your connections properly, which is why you're seeing so many threads.

                  1 of 1 people found this helpful
                  • 6. Re: Limit HornetQ threads
                    jbertram

                    To be clear, a pooled-connection-factory is only available in an application server like WildFly.  It's not available in standalone HornetQ.

                    • 7. Re: Limit HornetQ threads
                      sumkale

                      Given the HornetQ documentation, is it safe to assume that one should not close the connection and session at the code level, since that is handled by the pool?

                      Also, is "<connection-ttl>-1</connection-ttl>" causing the high number of threads, since it instructs the server NOT to check for dead connections and not to reclaim them?

                      • 8. Re: Limit HornetQ threads
                        jbertram

                        Given the HornetQ documentation, is it safe to assume that one should not close the connection and session at the code level, since that is handled by the pool?

                        The pool does not magically close connections.  It's just a pool.  The application code acquires connections from the pool and returns them to the pool by "opening" and "closing" them.  If you want your application to hang on to a connection from the pool, that's fine, but you must realize that the connection will not be available to other components even when it's idle, which kind of defeats the whole purpose of using a pool in the first place.

                         

                        Also, is "<connection-ttl>-1</connection-ttl>" causing the high number of threads, since it instructs the server NOT to check for dead connections and not to reclaim them?

                        No.  No connection-ttl value causes threads to be created.  I imagine your application is responsible for that.  The connection-ttl is basically a fail-safe for the broker for when clients don't handle their connections properly or if a remote client's connection dies before it has a chance to close its connection.  The connection-ttl is typically set to -1 for in-vm connections because it's assumed that 1) applications will handle their connections properly and 2) there's no risk of network failure since there's no network involved in an in-vm connection.
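
                        For remote clients, that fail-safe can be made explicit on the connection factory. A sketch, using the "RemoteConnectionFactory" from the configuration above (the timeout values are examples only, not recommendations):

```xml
<!-- Sketch: explicit dead-connection fail-safe for remote clients (example values). -->
<connection-factory name="RemoteConnectionFactory">
    <connectors>
        <connector-ref connector-name="netty"/>
    </connectors>
    <entries>
        <entry name="java:/RemoteConnectionFactory"/>
        <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
    </entries>
    <!-- Server drops the connection if it hears nothing from the client for 60s. -->
    <connection-ttl>60000</connection-ttl>
    <!-- Client checks every 30s that it is still receiving pings from the server. -->
    <client-failure-check-period>30000</client-failure-check-period>
</connection-factory>
```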

                        1 of 1 people found this helpful