5 Replies Latest reply on Feb 25, 2011 7:46 AM by xandrewrampulla

    Thread.yield consuming a lot of CPU

    xandrewrampulla

      When using HornetQ, JProfiler reports that HornetQ's use of Thread.yield is consuming a large amount of our CPU.  I started looking at the code in RemotingConnectionImpl.bufferReceived and noticed something that bothered me: this code sits in a very tight loop calling Thread.yield until a packet is fully received.  That seems bad to me; I would have expected a wait/notify mechanism here to minimize the impact on the rest of the system.  I am not familiar with the internals of HornetQ at all, so I would appreciate it if someone who is could do a quick review of this area to see whether my concerns are unwarranted.
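
      To make it concrete, the pattern I'm describing boils down to something like this (a simplified sketch with placeholder class, field and method names of my own, not the actual HornetQ source):

```java
// Simplified sketch of a busy-wait loop: one thread spins on a
// volatile flag, calling Thread.yield() until the flag clears.
public class BusyWaitSketch {
    private volatile boolean executing = true;

    public void awaitCompletion() {
        // Tight loop: keeps the CPU busy, repeatedly yielding until done.
        while (executing) {
            Thread.yield();
        }
    }

    public void finish() {
        executing = false; // observed by the spinning thread (volatile)
    }

    public static void main(String[] args) throws InterruptedException {
        BusyWaitSketch s = new BusyWaitSketch();
        Thread spinner = new Thread(s::awaitCompletion);
        spinner.start();
        Thread.sleep(50);
        s.finish();
        spinner.join(1000);
        System.out.println("spinner alive: " + spinner.isAlive());
        // prints "spinner alive: false"
    }
}
```

      The spinner does exit correctly, but until finish() is called it burns a full core doing nothing useful.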

       

      Thanks.

        • 1. Thread.yield consuming a lot of CPU
          clebert.suconic

          That's just there to avoid pings never returning to the client when the CPU is over-used.

           

          I don't believe the yield is really consuming a lot of CPU. It's probably just the overhead of the profiler itself, since yield is a very short method.


          What's the size of your thread pool, BTW? I just found an issue with high CPU usage when the pool is full.


          Also: I'm moving this to the users' forum; I don't see any development activity here at this point.

          • 2. Thread.yield consuming a lot of CPU
            xandrewrampulla

            20, I believe (assuming this is the right configuration file):

            jms-ds.xml

            <tx-connection-factory>
               <jndi-name>JmsXA</jndi-name>
               <xa-transaction/>
               <rar-name>jms-ra.rar</rar-name>
               <connection-definition>org.hornetq.ra.HornetQRAConnectionFactory</connection-definition>
               <config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
               <config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
               <max-pool-size>20</max-pool-size>
               <security-domain-and-application>JmsXARealm</security-domain-and-application>
            </tx-connection-factory>

             

            The yield could be consuming a large amount of CPU just through the rapid context switching, and even the minimal code it takes to issue the yield could contribute to a high CPU load when you consider that it runs in a very tight loop that probably executes a zillion times.
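
            As a rough illustration of how hot such a loop is (the numbers vary wildly by OS, JVM and load, and this is not a benchmark of HornetQ itself, just of a bare yield loop):

```java
// Count how many Thread.yield() calls a single spinning thread can
// issue in a fixed time window. Each call is a scheduler interaction,
// so a long-lived loop like this means constant context-switch churn.
public class YieldRate {
    public static void main(String[] args) {
        long deadline = System.nanoTime() + 200_000_000L; // 200 ms window
        long count = 0;
        while (System.nanoTime() < deadline) {
            Thread.yield();
            count++;
        }
        System.out.println("yields in 200 ms: " + count);
    }
}
```

            Even a modest per-call cost adds up when the loop runs for as long as a packet is in flight.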

            • 3. Thread.yield consuming a lot of CPU
              ataylor

              From your configuration, it's quite possible this isn't HornetQ at all: you're using JCA, which complicates matters.

               

              Also, since this is typically in-VM, I would have thought that piece of code wouldn't get called much, since buffers are passed in whole rather than in parts. That said, it's hard to comment; maybe you could explain your topology, or better still, provide something we could reproduce.

              • 4. Thread.yield consuming a lot of CPU
                xandrewrampulla

                Sadly I'm not going to be able to get back to this for a few days.  I'll post an example once I get some free time to build one.

                • 5. Thread.yield consuming a lot of CPU
                  xandrewrampulla

                  I was able to run an experiment (although I cannot create a reproducible test case for you, since it was on our existing system).  When I rebuilt hornetq-core-client-java5.jar and replaced it with my own build, RemotingConnectionImpl no longer showed up in the profiling results, which indicates that this code does contribute.

                   

                  Also, someone else on the team used Byteman to inject some code into the RemotingConnectionImpl.bufferReceived method and found that Thread.yield was being called 400 times per second.
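
                  For reference, the injection can be done with a Byteman rule along these lines (the rule name and counter key here are my own placeholders, and the exact fully-qualified class name and method signature may differ between HornetQ versions, so check against your sources):

```
RULE count bufferReceived entries
CLASS RemotingConnectionImpl
METHOD bufferReceived
AT ENTRY
IF true
DO incrementCounter("bufferReceived")
ENDRULE
```

                  Reading the counter periodically (or adding a second rule that calls traceln) gives the calls-per-second figure.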

                   

                  I believe the following changes are a much more efficient way to do this wait.

                   

                  Add a new member variable for locking:

                     private final Object executingLock = new Object();

                   

                  In the part that does the notification:

                     public void run()
                     {
                        try
                        {
                           doBufferReceived(packet);
                        }
                        catch (Throwable t)
                        {
                           RemotingConnectionImpl.log.error("Unexpected error", t);
                        }

                        executing = false;
                        synchronized (executingLock)
                        {
                           executingLock.notifyAll();
                        }
                     }

                   

                  In the part that does Thread.yield:

                     // To prevent out-of-order execution when interleaving sync
                     // and async operations on the same connection
                     while (executing)
                     {
                        // Thread.yield();
                        synchronized (executingLock)
                        {
                           try { executingLock.wait(500); } catch (InterruptedException e) {}
                        }
                     }

                   

                  Is this something that could be integrated into the source in the next release?