If you are using Java 8, one thing you can do is determine where the memory is going by enabling Native Memory Tracking with -XX:NativeMemoryTracking=summary.
Then you can periodically inspect it with:
$ jcmd <pid> VM.native_memory summary
This gives you insight into memory used by threads, classes, heap, GC, etc.
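As a sketch (assuming you start WildFly via standalone.sh and can edit JAVA_OPTS in bin/standalone.conf; <pid> stands for the server's process id), enabling NMT and diffing against a baseline might look like:

```shell
# In bin/standalone.conf -- add the NMT flag (summary is cheap; detail costs more)
JAVA_OPTS="$JAVA_OPTS -XX:NativeMemoryTracking=summary"

# After the server is up: record a baseline, then later show growth against it
jcmd <pid> VM.native_memory baseline
jcmd <pid> VM.native_memory summary.diff
```

The summary.diff output makes it easy to spot which category (heap, thread, class, code, GC) is actually growing over time.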
I found that WildFly 10 started an excessive number of threads based on the number of CPUs it detected.
How do you measure the memory consumption? Which platform are you using (Windows, Linux, Solaris, ...)?
As max heap and max metaspace are set to 1400m and 320m, the process needs to reserve ~1.7 GB of memory.
But this does not mean it actually allocated that much.
For performance reasons I would set -Xms equal to -Xmx; that allocates the full heap up front and avoids resizing it at runtime.
The JVM might decide to allocate memory anywhere between the minimum and maximum, but you need to enable GC logging to see what the JVM really uses.
Also note that the JVM consumes -Xmx + -XX:MaxMetaspaceSize + thread stacks + internal areas, so the allocated memory is somewhat more than those parameters suggest.
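To see why, here is a back-of-the-envelope estimate. The code-cache size and thread count below are illustrative assumptions (240 MB is the usual JDK 8 default reserved code cache, 1 MB the default Linux x64 stack), not measurements of this server:

```shell
heap=1400        # -Xmx in MB
metaspace=320    # -XX:MaxMetaspaceSize in MB
code_cache=240   # assumed -XX:ReservedCodeCacheSize in MB
threads=200      # assumed number of live threads
stack=1          # assumed -Xss per-thread stack in MB

total=$((heap + metaspace + code_cache + threads * stack))
echo "estimated footprint: ${total} MB"   # noticeably more than -Xmx alone
```

Even with conservative numbers, the process footprint lands well above the -Xmx value, which matches what top reports.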
I use a Linux distribution supplied by Amazon EC2. After I started WildFly, I ran the 'top' command and java was at the top of the list, consuming 1.7 GB of memory as I mentioned in the question. I have 2 GB of memory in total, so this causes me a lot of trouble. I would expect WildFly to consume 1.7 GB only if necessary, but it takes all of it at start. I can start WildFly with 1 GB of memory in total by changing the parameters, but in some cases that may not be enough. That is why I give it a maximum of 1.7 GB in total. Is there something I can do to get what I want?
Did you solve this problem? I'm facing a similar issue deploying WildFly 10.1.0 in CentOS 7 Docker containers. Memory pressure is causing the kernel to kill the WildFly service.
You can use tools like jmap (which comes bundled with the JDK) to get the memory usage details of the process and then analyze them to see what is consuming that memory.
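For example (assuming a JDK 8 jmap pointed at the WildFly process id), a live-object histogram is often a quick first look, with a heap dump for deeper offline analysis:

```shell
# Class histogram of live objects (":live" triggers a full GC first)
jmap -histo:live <pid> | head -n 20

# Binary heap dump for offline analysis in a tool such as Eclipse MAT
jmap -dump:live,format=b,file=heap.hprof <pid>
```

Note that jmap only covers the Java heap; for native memory (threads, metaspace, code cache) the NMT/jcmd approach mentioned above is the better fit.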