100GB will take time
How is your cache configured? Are you trying to keep the full 100 GB in memory for the test?
Maybe an async put will help, or you could use multiple instances.
- How are you loading the data: in a single thread or multi-threaded?
- How many cluster nodes are you using? If more than one, each node can query a disjoint set of data from the database and load it into the cache. I recommend using a distributed cache for this approach.
- As Wolf-Dieter suggested, you could use putAsync to speed up your load.
Either way, 100 GB will take time: at the source (the database), while processing the relational data into objects, and while loading the cache.
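To make the multi-threaded/putAsync suggestion concrete, here is a minimal sketch of a bounded-concurrency async loader. It uses a plain ConcurrentHashMap and CompletableFuture as a stand-in for the remote cache (with Infinispan you would call RemoteCache.putAsync the same way), and splitRange is a hypothetical helper that gives each cluster node a disjoint slice of the database keys.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class BulkLoader {
    // Disjoint [start, end) key range for one node, so each cluster
    // member loads a different slice of the database.
    static long[] splitRange(long totalRows, int nodeCount, int nodeIndex) {
        long chunk = (totalRows + nodeCount - 1) / nodeCount; // ceiling division
        long start = chunk * nodeIndex;
        long end = Math.min(start + chunk, totalRows);
        return new long[] {start, end};
    }

    // Stand-in for RemoteCache.putAsync: one future per put.
    static CompletableFuture<Void> putAsync(ConcurrentMap<Long, String> cache,
                                            long key, String value,
                                            Executor executor) {
        return CompletableFuture.runAsync(() -> cache.put(key, value), executor);
    }

    public static void main(String[] args) {
        ConcurrentMap<Long, String> cache = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        long[] range = splitRange(1_000, 2, 0); // node 0 of 2 loads [0, 500)
        List<CompletableFuture<Void>> inFlight = new ArrayList<>();
        for (long id = range[0]; id < range[1]; id++) {
            inFlight.add(putAsync(cache, id, "row-" + id, pool));
            if (inFlight.size() == 100) { // bound the in-flight window
                CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
                inFlight.clear();
            }
        }
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        System.out.println("loaded " + cache.size() + " entries");
    }
}
```

Bounding the in-flight window matters: firing millions of unawaited async puts can exhaust memory on the client before the grid ever sees them.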
Another point:
What is the best way to feed data into Datagrid Server?
I have a cluster of Datagrid Servers, and I will receive thousands of events every minute through the system.
Is it better to use a remote Hot Rod client that receives data from the system and pushes it to the grid?
Could you tell me what the source of those events is? But in general, yes, a Hot Rod client might be a good fit.
Sources of events might be:
- MQ or JMS queue
- Apache Spark
I'm assuming all three sources allow you to stream data (rather than loading everything at once). In that case a small Hot Rod client-based app would be a perfect fit.
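A sketch of what such a small feeder app could look like: events arrive on a queue (standing in for the MQ/JMS or Spark source) and a pool of workers drains it into the grid. The grid here is a ConcurrentHashMap stand-in; with a real Hot Rod client you would obtain a RemoteCache from a RemoteCacheManager and call put/putAsync instead. The Event shape, key scheme, and sizes are all made up for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class EventFeeder {
    record Event(String id, String payload) {} // hypothetical event shape

    // Drain all events through a worker pool into the grid (a ConcurrentHashMap
    // stand-in here; a real app would push to a RemoteCache via put/putAsync).
    static ConcurrentMap<String, String> feedAll(List<Event> events, int workers)
            throws InterruptedException {
        BlockingQueue<Event> incoming = new LinkedBlockingQueue<>(events);
        ConcurrentMap<String, String> grid = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.execute(() -> {
                Event e;
                while ((e = incoming.poll()) != null) { // stop when queue is drained
                    grid.put(e.id(), e.payload());      // RemoteCache.put in real code
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return grid;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a burst of events from the source system.
        List<Event> burst = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) burst.add(new Event("evt-" + i, "data-" + i));
        System.out.println("stored " + feedAll(burst, 4).size() + " events");
    }
}
```

In a long-running feeder the queue would be fed continuously by the MQ/JMS or Spark consumer rather than pre-filled, but the push side toward the grid stays the same.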