2 Replies - Latest reply on Jul 4, 2006 7:16 AM by richard.powell

    Matching thread numbers to thread names

    richard.powell

      Is it possible to match the thread numbers shown by the system tools to the thread names shown by the JBoss tools?

      We are suffering from poor system performance and have traced this to a number of threads running in the JBoss/Java process.

      The thread numbers can be seen using the command:

      prstat -n 10 -L

      The thread names can be seen using the command:

      twiddle.sh invoke "jboss.system:type=ServerInfo" listThreadDump

      However, I cannot tie the numbers to the names. If I could, then I should be able to work out what is busy.

      We are running JBoss 3.2.7 under JDK 1.4.2. The server is an FSC SPARC machine with 8 GB of memory and 4 CPUs, running Solaris 8.

      Is there anything that can be twiddle'd to return the extra information?

      Thanks

      Richard

        • 1. Re: Matching thread numbers to thread names
          jaikiran

          Usually in these cases, we obtain a thread dump, which lists all the current threads in JBoss and the activities being carried out by them. Here's how it's done:

          http://www.jboss.org/wiki/Wiki.jsp?page=StackTrace

          See if it helps.
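
          For reference, the dump itself is usually triggered on Unix by sending SIGQUIT (signal 3) to the Java process; the output is written to the JVM's standard output, which normally ends up in the console log. A minimal sketch (the process id 12345 below is only a placeholder):

          ps -ef | grep java    # find the JBoss/Java process id
          kill -3 12345         # send SIGQUIT; the thread dump goes to the JVM's stdout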

          • 2. Re: Matching thread numbers to thread names
            richard.powell

            Thanks for the response and apologies for my late reply.

            The kill -3 worked fine and we managed to get the thread/stack trace.

            Initially we had some problems relating the threads in the dump to the busy ones on the system; in many cases the thread number did not appear in the listing. However, by subtracting 1 from the LWP number that the Unix prstat command gave us, we could match the nid entry in the thread dump. This consistently showed that the same piece of code was causing the problem.
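
            To make that matching step concrete, here is a small sketch of the kind of commands involved (the file name threads.txt and the LWP value 58 are only placeholders):

            LWP=58                          # busy LWP reported by prstat -n 10 -L
            NID=`expr $LWP - 1`             # subtract 1 as described above
            HEX=`printf '%x' $NID`          # nid values in the thread dump are shown in hex
            grep "nid=0x$HEX" threads.txt   # locate the matching thread in the saved dump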

            The final cause of the problem turned out to be generated SQL that was returning multiple rows from a table (instead of one). As the system has been used, this table has grown, and we evidently hit a critical point where system performance dropped off.

            We managed to reproduce this in our test environment and improved performance by flushing old entries from the table. The root cause still exists but at least we know how to manage it.

            Thanks for your help

            Richard