Horizontal scalability is very important for us, so we wanted to test it first.
First I ran a test with a single server to find the maximum throughput (measured at the sending end).
The test scenario is:
- 40 producers
- 40 consumers
- 1kB message size
- One queue
- Persistent messages
- Not transacted
- AUTO_ACKNOWLEDGE mode
- No optimisations such as disabling message ID or timestamp
The client runs 40 threads, so each thread has one producer and one consumer (message listener). Only one connection is created; it is used to create a session for each thread. This ensures that producers and consumers are spread equally (round robin) across all servers.
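For illustration, the client layout described above can be sketched as follows. This is a minimal, broker-free sketch: an in-memory BlockingQueue stands in for the JMS queue so it runs without a server, and the class name, message count, and constants are illustrative, not taken from the attached client code. In the real client, each thread would get its own session from the single shared connection via connection.createSession(false, Session.AUTO_ACKNOWLEDGE).

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the client layout: 40 threads, each pairing one producer with one
// consumer. An in-memory BlockingQueue stands in for the broker queue so the
// sketch runs without a server; names and sizes here are illustrative.
public class ThroughputSketch {
    static final int THREADS = 40;
    static final int MSGS_PER_THREAD = 1000;      // hypothetical test size
    static final byte[] PAYLOAD = new byte[1024]; // 1 kB message body

    // Returns {messagesSent, messagesReceived}.
    public static long[] runTest() throws InterruptedException {
        BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>(); // stand-in for the JMS queue
        AtomicLong sent = new AtomicLong();
        AtomicLong received = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(THREADS * 2);

        for (int t = 0; t < THREADS; t++) {
            // Producer half of the pair (producer.send(...) in the real client).
            pool.submit(() -> {
                for (int i = 0; i < MSGS_PER_THREAD; i++) {
                    queue.put(PAYLOAD);
                    sent.incrementAndGet(); // throughput is measured at the sending end
                }
                return null;
            });
            // Consumer half (a MessageListener's onMessage(...) in the real client).
            pool.submit(() -> {
                for (int i = 0; i < MSGS_PER_THREAD; i++) {
                    queue.take();
                    received.incrementAndGet();
                }
                return null;
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return new long[] { sent.get(), received.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        long[] counts = runTest();
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.println("sent=" + counts[0] + " received=" + counts[1]
                + " rate=" + Math.round(counts[0] / secs) + " msgs/sec");
    }
}
```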
The configuration uses multicast discovery; the only thing modified in the config was journal-min-files, which was set to 100.
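For reference, the modified setting looks like this. This is a sketch assuming HornetQ's hornetq-configuration.xml syntax (which matches the journal-min-files and multicast discovery settings mentioned); the broadcast-group values shown are typical defaults for illustration, not copied from the attached config.

```xml
<!-- Sketch, assuming HornetQ's hornetq-configuration.xml syntax. -->
<!-- The only change from the default configuration: -->
<journal-min-files>100</journal-min-files>

<!-- Multicast discovery left as shipped; illustrative default values: -->
<broadcast-groups>
   <broadcast-group name="bg-group1">
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <connector-ref>netty</connector-ref>
   </broadcast-group>
</broadcast-groups>
```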
Client code and server configuration are attached. Four physical nodes were used. Two for clients and two for servers.
Test scenario 1: one server and one client
Maximum throughput was ~8700 msgs/sec.
Test scenario 2: two servers and two clients
I expected maximum throughput to almost double (to roughly 2 × 8700 ≈ 17,400 msgs/sec), but actual throughput was only ~10,000 msgs/sec.
I used JMX to verify that producers and consumers were spread equally (40 producers and 40 consumers on each server). I also tried running each server separately, and performance was fine.
Can you please advise on what else I can check, or what I might have configured wrong?
Your help is appreciated.