That's hardly enough information for us to help you. Since this appears to be mainly about how your application works, you might want to profile the application and see if you can find something. By the way, did you really mean 8MB of heap space?
I've profiled my application: heap memory, perm gen memory, threads, CPU, etc. Nothing "strange".
Yes, I know 8M of heap space is very exaggerated; it was just a test to see if the problem was related to memory.
Let me explain more about this web service:
- It's a web service that responds to a web site that sells images.
- To know which images to return, we use SolrJ to communicate with Solr, which gives us the paths of the images to show on the web site.
- For each request, the web service reads about 20 images FROM DISK into memory, converts them to Base64, and sends them back to the web site, so you can imagine the size of each response.
My "feeling", for what it's worth, is that the problem is related to the volume of strings in each request/response.
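If the string volume per response is the suspect, one way to shrink it is to stream the Base64 encoding instead of materializing each image as one large String. This is only a sketch, not the poster's actual code; it uses `java.util.Base64` (JDK 8+), whereas on the Java 7 setup in question you would need a library such as Commons Codec's `Base64OutputStream` for the same idea:

```java
import java.io.*;
import java.nio.file.*;
import java.util.Base64;

public class StreamingBase64 {
    // Encode a file to Base64 directly into an output stream, so the
    // whole image never has to live in memory as a single large String.
    static void encodeTo(Path image, OutputStream out) throws IOException {
        OutputStream b64 = Base64.getEncoder().wrap(out);
        Files.copy(image, b64);
        b64.close(); // flushes the final padding characters
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("img", ".bin");
        Files.write(tmp, new byte[]{1, 2, 3});
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        encodeTo(tmp, sink);
        System.out.println(sink.toString("US-ASCII")); // bytes 1,2,3 -> "AQID"
    }
}
```

Streaming like this keeps the per-request footprint at roughly one buffer instead of (image bytes + byte array copy + Base64 String), which is where a "volume of strings" problem would come from.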
It works in WebLogic with JRockit Java 6, but in JBoss 7.1.1 with Java 7 it only works for a few hours... If the problem were a memory or class loading leak, I suppose it would arise in WebLogic too, but in WebLogic it really works smoothly without any issue (WebLogic 10.3.3, which by the way has some bugs of its own).
Errors I found:
- Perm gen OutOfMemoryError in some cases (even with 2M!!!)
- CPU at 100% on all 4 cores, but with very low traffic.
- The JBoss web console stops responding, but a telnet to port 8080 still responds... (it seems like a livelock or something similar)
- The JMX data in the web console sometimes reports negative numbers???? Very strange...
Could anyone from the JBoss development team help me resolve this case, or anyone who has had a similar problem?
We are really trying to migrate to JBoss, and when the Enterprise 6 edition comes out we could probably buy support, but this experience makes me think of trying GlassFish instead...
Do you mean 2G of heap size, or really 2M?
Can you post the JAVA_OPTS that get displayed on boot of the server, so we can see the actual values used?
Below are the JAVA_OPTS. (In my last attempt I added -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+UseConcMarkSweepGC to resolve the perm gen OutOfMemory, but the result was CPU at 100%.)
JAVA_OPTS="-Xms2048m -Xmx8192m -XX:MaxPermSize=2048m -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+UseConcMarkSweepGC -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
It looks like some kind of memory leak somewhere.
Can you try EAP 6.1.0.Alpha (== 7.2.0.Final), which has some memory leaks fixed?
But AFAIK none of them would manifest in this manner.
Also, did you try to profile the app server with your application deployed?
I would recommend YourKit: just connect to the running server and take a memory snapshot (try doing a full GC first).
That will show you where most of the memory is going.
As for the CPU, you can take a CPU snapshot and it will show you where the CPU time is spent.
If the problem is insufficient memory (for example, a memory leak), the CPU time could be going to the garbage collector.
Otherwise it will show up in the profiler.
BTW, what JDK are you using? I would recommend the latest JDK 7 update.
If you are using JDK 7 before update 4, there was a known issue with Lucene (which Solr uses behind the scenes) that could cause infinite loops.
After I updated to the latest JDK 1.7, the memory leak vanished! Curiously, the JDK changelog doesn't list any fixes related to this memory leak.