I have 2 applications in production on RHEL 4 with JBoss 4.2.2.GA and Seam 2.0.x
and have no issues like that. The apps have been running with more than 2 months of uptime (so far); the only downtime is publishing a new version of an app.
When this error occurs, check which handles are open. If you are running the server on Linux you can get that with the
lsof command; on Windows, google for the
Handle command-line application.
When you see which process is keeping handles open, or which files are open, you can narrow the problem down to your application, to JBoss AS, or to Seam.
But in any case, post your results.
I don't have this issue, but one thing that might help prevent it is reducing the scanning frequency of the deployment directory.
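As a sketch of where that setting lives (paths and attribute names below are from a default JBoss 4.2 install using the `default` server configuration; verify against your own setup):

```shell
# Locate the hot-deployment scanner MBean in the server config
grep -B2 -A6 'URLDeploymentScanner' "$JBOSS_HOME/server/default/conf/jboss-service.xml"
# Raise its ScanPeriod (milliseconds between scans), e.g.
#   <attribute name="ScanPeriod">60000</attribute>
# or turn hot-deployment scanning off entirely for production:
#   <attribute name="ScanEnabled">false</attribute>
```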
By default JBoss is configured 'optimized' for development, not production. I guess you need to buy the 'total package' from Red Hat to get a production-optimized profile; however, I'm not the best qualified to comment on this one.
However, the JBoss wiki has (or had) a page with tips & tricks to fine-tune a JBoss profile (sorry, got no time to look it up right now; maybe I'll come back to this later).
We had similar problems. They only showed up on my colleagues' Linux setups, not on my XP setup. I realized that the number of files allowed to be open simultaneously on their Ubuntu installations was rather low, only 1024. I sent them the following (which I frankly don't know the details of - I am not a Linux guy - but I do know how to google), which apparently made them happy: ulimit -n 10000
So in short: Try increasing this if it is too low on your machine.
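For anyone trying this, a minimal sketch (the value 10000 is just the one that worked for us, and the user name `jboss` in the permanent variant is an assumption - use whatever user runs your server):

```shell
# Show the current per-process open-file limit (a 1024 default is easy to hit)
ulimit -n
# Raise it for the current shell session only
ulimit -n 10000
# To make it permanent, add entries to /etc/security/limits.conf
# for the user that runs JBoss, e.g.:
#   jboss  soft  nofile  10000
#   jboss  hard  nofile  10000
```

Note that `ulimit -n` only affects the current shell and its children, so it must be run in the same session (or init script) that launches the server.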
I'll let you know..... this sounds good. Also, Tomaz's response led me in the right direction too.
So, anyone can try this in their Linux shell. List your processes and find the main PID for JBoss using jps (java process status):
# jps -l
32218 sun.tools.jps.Jps
280 org.jboss.Main
Then feed that PID to lsof (list open files):
# lsof -p 280 | wc -l
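The two steps can be combined into one snippet (this assumes jps is on the PATH and exactly one org.jboss.Main process is running):

```shell
# Grab the JBoss PID from jps and count its open files in one go
PID=$(jps -l | awk '/org\.jboss\.Main/ {print $1}')
lsof -p "$PID" | wc -l
```

Running it periodically (e.g. via watch) makes the upward trend easy to see.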
You will see the number of open files rise dramatically depending on how busy your server is. I can tell ya, I have been watching this for a while... that number ain't going down. Resources are not being closed... BAD JBOSS (not Seam) PROGRAMMERS!
Now you can see the limit by doing a
# sysctl -a
and looking for the fs.file-max key: the maximum number of open files that can be maintained by the Linux kernel. That, I believe, is the overall system-wide limit on open files, as opposed to the per-process ulimit (can someone verify?).
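You can also query that key directly instead of scanning all of `sysctl -a` (paths below are standard Linux /proc entries):

```shell
# Query the kernel-wide open-file limit directly
sysctl fs.file-max
# The same value via /proc:
cat /proc/sys/fs/file-max
# file-nr shows allocated handles, free handles, and the max,
# so you can actually watch the leak approach the limit:
cat /proc/sys/fs/file-nr
```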
So what you have is a ticking time bomb: the number of open files will, slowly or rapidly, be driven up by JBoss alongside the other, well-behaved processes until fs.file-max is reached, at which point, I can only speculate, the kernel just says screw it and refuses to open any new files, which is why you see a 5-6 minute downtime.