I am attempting to get a &lt;rich:tree&gt; with 30,000+ nodes to load with all of the items expanded up front, and have thus far been unsuccessful. I have tried the NEKO and NONE XML parsers. I have tried all three switch types (client, ajax, and server). Nothing provides enough of an efficiency boost to keep the browser/JVM from running out of memory.
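For reference, this is roughly the kind of markup involved (a minimal sketch; the bean name, node structure, and attribute values are placeholders, not my actual code):

```xml
<!-- Hypothetical example of the tree in question.
     switchType="client" renders every node to the page up front,
     which is where the memory blowup occurs with 30,000+ nodes. -->
<rich:tree switchType="client" value="#{treeBean.rootNode}" var="item">
    <rich:treeNode>
        <h:outputText value="#{item.name}"/>
    </rich:treeNode>
</rich:tree>
```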
I am able to load my tree just fine with all nodes collapsed from the start, but my business requirements preclude that option for me.
Does anyone have any recommendations for otherwise increasing the efficiency of tree loading with this many nodes, or is it just not going to happen?