In most distributed computing architectures, we cannot avoid programming directly against bytes: serialization/deserialization, file reading and writing, and so on. So we all need a solution that handles memory allocation and deallocation efficiently, with good performance and acceptable overhead.
So here I introduce a memory pool design. Its underlying implementation depends on the ByteBuffer of Java NIO.
1. Provide the estimated memory block consumption.
We assume every memory request can be categorized into one of several blocks (segments). Segment sizes could be 64 bytes, 128 bytes, 256 bytes, 512 bytes, and so on. We then define the pool's initial and final segment sizes, for example init = 2 * 1024 bytes and end = 16 * 1024 bytes. The pool dispatches each allocation request to the smallest segment that can fulfill it.
With those bounds, the segment sizes are 2*1024, 4*1024, 8*1024, and 16*1024.
If a request needs 3456 bytes, the 4*1024 segment is the smallest one that can fulfill it, so the pool serves the request from that segment.
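The segment-selection step described above can be sketched as follows. The class and method names here are assumptions for illustration, not part of any published API; the only fixed idea is doubling segment sizes from the initial to the final bound and picking the smallest one that fits.

```java
// A minimal sketch of dispatching an allocation request to a segment.
// Segment sizes double from minSegment up to maxSegment, e.g. 2 KB .. 16 KB.
public class SegmentSelector {
    private final int minSegment; // e.g. 2 * 1024
    private final int maxSegment; // e.g. 16 * 1024

    public SegmentSelector(int minSegment, int maxSegment) {
        this.minSegment = minSegment;
        this.maxSegment = maxSegment;
    }

    /**
     * Returns the smallest segment size that can satisfy the request,
     * or -1 if the request exceeds the largest segment.
     */
    public int selectSegment(int requestedBytes) {
        for (int segment = minSegment; segment <= maxSegment; segment *= 2) {
            if (segment >= requestedBytes) {
                return segment;
            }
        }
        return -1; // request too large for this pool
    }
}
```

For the example above, `new SegmentSelector(2 * 1024, 16 * 1024).selectSegment(3456)` returns 4096, i.e. the 4*1024 segment.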
2. Direct ByteBuffer or Heap ByteBuffer
I decided to support both of them, as I think that decision belongs to the application designer. Allocating a direct ByteBuffer is slower than allocating a heap ByteBuffer, but reads and writes against a direct ByteBuffer tend to be faster than against a heap one - of course, these statements describe average behavior, not every case.
By the way, if you decide to use direct ByteBuffers, remember to initialize the pool as early as possible, before your application code runs.
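The choice between the two buffer kinds can be captured in a small factory, sketched below. The class name and flag are hypothetical; `ByteBuffer.allocate` and `ByteBuffer.allocateDirect` are the real java.nio entry points.

```java
import java.nio.ByteBuffer;

// A minimal sketch of letting the application designer choose the buffer kind.
public class BufferFactory {
    private final boolean useDirect;

    public BufferFactory(boolean useDirect) {
        this.useDirect = useDirect;
    }

    public ByteBuffer newBuffer(int capacity) {
        // Direct buffers live outside the Java heap: slower to allocate,
        // often faster for bulk I/O. Heap buffers are the reverse.
        return useDirect
                ? ByteBuffer.allocateDirect(capacity)
                : ByteBuffer.allocate(capacity);
    }
}
```

Because direct buffers are costly to allocate, constructing the pool (and its direct buffers) once at startup amortizes that cost, which is exactly why the pool should be initialized before the application code runs.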