Maybe your application has an infinite loop somewhere? What objects are being added to PermGen? Obtain a report using jstack and see where it's stuck:
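As a minimal sketch: find the server's process id with jps, then run `jstack <pid>`. The same per-thread stacks can also be captured from inside the JVM via the standard ThreadMXBean API (the class name here is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.ArrayList;
import java.util.List;

public class ThreadDump {
    /** Returns one formatted stack entry per live thread, similar to jstack's output. */
    static List<String> dumpAll() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        List<String> dump = new ArrayList<>();
        // dump every live thread with its locked monitors and synchronizers
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            dump.add(info.toString());
        }
        return dump;
    }

    public static void main(String[] args) {
        dumpAll().forEach(System.out::print);
    }
}
```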
Also, your AS version is pretty old; consider upgrading to a newer one.
> From the JProfiler monitoring, it shows the PermGen memory area is getting filled up
I rather doubt that this is true. The permgen always runs at 99% full; seeing that is not a cause for alarm. If there were not enough room to add additional classes, you would get an OOME.
Radoslav's recommendation is the best: take a Java thread dump (not a Linux thread dump) and examine the stack traces for the threads that are "stuck".
We don't seem to be getting any of the usual problems such as OOM, errors in log files, etc.
It may well be possible that the database/data is eating up memory due to bad queries. Is there any way to validate or monitor memory usage from the database using some tool?
There is no separate database memory (well, there is on the database server, but not on your application server). All database data will show up in the heap.
What did the thread dump tell you? The answer for a hanging problem is in there!
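As a minimal sketch of what such a dump reveals: a thread stuck in an infinite loop shows up as permanently RUNNABLE at the same stack frames. The effect can be simulated in isolation (the class and thread names here are made up for the example):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class SpinDetector {
    /** True if a live thread with the given name is currently RUNNABLE (i.e. burning CPU). */
    static boolean isSpinning(String name) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (name.equals(info.getThreadName())
                    && info.getThreadState() == Thread.State.RUNNABLE) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        // simulate the "hanging" request: a thread stuck in an infinite loop
        Thread spinner = new Thread(() -> { while (true) { } }, "stuck-request");
        spinner.setDaemon(true);
        spinner.start();
        Thread.sleep(200); // give it time to start spinning

        // in a real jstack dump this thread would show up RUNNABLE in every snapshot
        System.out.println("stuck-request spinning: " + isSpinning("stuck-request"));
    }
}
```

Taking two or three dumps a few seconds apart and comparing them is the usual trick: a genuinely stuck thread sits at the same frames in all of them.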
So what we understand is that the app server is not really hanging. But as time passes, the application becomes painfully slow and it "looks" as if the server has hung, so the users are just restarting the server.
I'll try to get you the results of the thread dump from my tech team. Meanwhile, do you still think it is a Java problem and not a database problem?
That could be caused by one of two problems:
a) You have one or more requests that are "hanging". When they hang, the user might close the browser and try again. For example, there might be an infinite loop in the code which from that point on consumes processing power (and memory, though that usually is not too significant; it all depends on the app). Or there might be some other hanging issue, such as a database deadlock. A thread dump will find this. (I have had people swear that there was no such issue in their code, yet while the system was idle - no requests coming in - the CPU was running at some high percentage. A thread dump showed exactly where the loop was.) Eventually, all of the threads in the HTTP request thread pool get used up and no one can get on the system any more. (Just yesterday I helped some colleagues find such an issue in their portal code using a thread dump.)
b) You have a memory leak. Eventually the heap fills up to the point that the JVM has to constantly run full garbage collections. You can verify this by turning on one of the GC logging options (-verbose:gc should be enough to catch a heap problem; also use -Xloggc:xxx to log to a file). One way to find a memory leak is to attach JVisualVM to the process and use it to take heap snapshots about 10 or 15 minutes apart while the system is busy. Then have it compare the heap snapshots and show you the differences. That should tell you which object(s) have increased in number.
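The growth pattern described in b) can be illustrated with a minimal sketch (the class and method names are made up for the example). It is the same ever-growing collection that a JVisualVM snapshot diff would expose:

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // the classic leak: a static collection that grows and is never cleared
    static final List<byte[]> CACHE = new ArrayList<>();

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    /** Simulates n "requests" that each retain 1 MB; returns the growth in used heap. */
    static long simulateRequests(int n) {
        long before = usedHeap();
        for (int i = 0; i < n; i++) {
            CACHE.add(new byte[1024 * 1024]); // retained forever, GC cannot reclaim it
        }
        return usedHeap() - before;
    }

    public static void main(String[] args) {
        long grown = simulateRequests(50);
        System.out.printf("used heap grew by ~%d MB that will never be collected%n",
                grown / (1024 * 1024));
    }
}
```

In the snapshot diff, the byte[] (or whatever the leaked type is) keeps climbing between snapshots while most other counts stay roughly flat.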
I have the result of the dump. Let me know how I can share it.
Try checking memory allocation in JConsole; additionally, analyze the application's characteristics to determine what types of objects are created, based on targeted/specific test cases.