Yes, indeed we have only thought about read operations, which accumulate some result and return it to the (remote) invoker.
Your approach is definitely interesting, and I think we could make it work in a transaction as well, with each node preparing the updates and committing only if all nodes succeed. This is not implemented though.
Don't you think this should be provided by the DistributedExecutorService rather than the Map/Reduce API?
It makes sense for the DistributedExecutorService to be provided with some context, such as the cache and maybe more, as you suggest. Ideally the remoted Callable should receive services by injection via CDI. Would that work for you?
Thanks for your quick answer; see my answers below.
>> Don't you think this should be provided by the DistributedExecutorService rather than the Map/Reduce API?
<< What I like with the Map/Reduce approach is that the local node is the owner of the key/value pairs given to the mapper. With the DistributedExecutorService, it seems I would have to reimplement the filter in MapReduceCommand.perform that keeps only the key/value pairs owned by the local node, which I would like to avoid as it is internal logic. Moreover, products like Hadoop and HBase support read-write operations from a Map/Reduce job (via the OutputFormat), so why not have that in ISPN too? It would be awesome, don't you agree?
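To make the locality point concrete, here is a minimal, self-contained sketch of the per-node ownership filter being discussed. The interfaces and the `isLocal` hash are simplified local stand-ins, not the real Infinispan API (the real SPI lives in `org.infinispan.distexec.mapreduce`, and ownership is decided by the cluster's consistent hash); this only illustrates the filtering logic the poster would rather not reimplement on top of DistributedExecutorService:

```java
import java.util.*;

public class OwnershipFilterSketch {
    // Simplified stand-ins for the Map/Reduce SPI.
    interface Collector<KOut, VOut> { void emit(KOut key, VOut value); }
    interface Mapper<KIn, VIn, KOut, VOut> { void map(KIn key, VIn value, Collector<KOut, VOut> c); }

    // Toy "consistent hash": a key is local if it hashes to this node's slot.
    static boolean isLocal(Object key, int nodeId, int clusterSize) {
        return Math.floorMod(key.hashCode(), clusterSize) == nodeId;
    }

    // The per-node filtering loop done internally by the Map/Reduce machinery:
    // only entries owned by the local node are fed to the mapper.
    static <KOut, VOut> Map<KOut, List<VOut>> mapLocalEntries(
            Map<String, Integer> cacheData, int nodeId, int clusterSize,
            Mapper<String, Integer, KOut, VOut> mapper) {
        Map<KOut, List<VOut>> collected = new HashMap<>();
        Collector<KOut, VOut> collector = (k, v) ->
                collected.computeIfAbsent(k, x -> new ArrayList<>()).add(v);
        for (Map.Entry<String, Integer> e : cacheData.entrySet()) {
            if (isLocal(e.getKey(), nodeId, clusterSize)) {
                mapper.map(e.getKey(), e.getValue(), collector);
            }
        }
        return collected;
    }

    public static void main(String[] args) {
        Map<String, Integer> data = Map.of("a", 1, "b", 2, "c", 3, "d", 4);
        // Each node maps only its own share of the keys; together they cover all keys once.
        Map<String, List<Integer>> node0 = mapLocalEntries(data, 0, 2, (k, v, c) -> c.emit(k, v));
        Map<String, List<Integer>> node1 = mapLocalEntries(data, 1, 2, (k, v, c) -> c.emit(k, v));
        System.out.println(node0.size() + node1.size());
    }
}
```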
>> receive services by injection via CDI.. would that work for you?
<< That would be perfect
Nicolas, if you use the DistributedExecutorService submitEverywhere and submit methods with input keys, then the Callable task(s) will be executed only on nodes where the input keys are local!
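A toy sketch of the routing behaviour Vladimir describes (not the real API; in Infinispan the owning nodes come from the cache's consistent hash): when a task is submitted with input keys, it only needs to run on the nodes owning at least one of those keys.

```java
import java.util.*;

public class KeyRoutingSketch {
    // Toy ownership function standing in for the cluster's consistent hash.
    static int ownerOf(Object key, int clusterSize) {
        return Math.floorMod(key.hashCode(), clusterSize);
    }

    // Group the input keys by owning node; the task is sent once per group,
    // so nodes owning none of the keys never see it.
    static Map<Integer, Set<String>> routeKeys(Collection<String> inputKeys, int clusterSize) {
        Map<Integer, Set<String>> byNode = new TreeMap<>();
        for (String k : inputKeys)
            byNode.computeIfAbsent(ownerOf(k, clusterSize), n -> new TreeSet<>()).add(k);
        return byNode;
    }

    public static void main(String[] args) {
        System.out.println(routeKeys(List.of("a", "b", "c"), 2));
    }
}
```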
Thanks, Vladimir, for your remark; I did not realize it, as I only checked what matched my use case best, which actually does not involve providing a set of keys. In my use case I don't know in advance which entries will be modified, or how many; I need to iterate over all the keys to find out. That's why the Map/Reduce approach with access to the cache would be perfect for me.
@Sanne IMHO, if you don't want to have to modify the API now (which makes sense as ISPN 5.1 is already a CR), you had better do ISPN-1636 directly for ISPN 5.1 and skip ISPN-1634, don't you agree?
Moreover, maybe I'm naive, but it does not look so hard to implement: add the ComponentRegistry to the init method of the class MapReduceCommand (which also means CommandsFactoryImpl and MapReduceTask must be modified to pass this new parameter), then call componentRegistry.wireDependencies(object) on the mapper and the reducer in the perform method of MapReduceCommand. Don't you agree?
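For illustration, here is a minimal reflection-based sketch of the kind of field wiring that a call like componentRegistry.wireDependencies(mapper) would perform. The @Inject annotation, the registry map, and MyMapper are local stand-ins invented for this sketch; Infinispan's actual ComponentRegistry internals are more involved (setter-based @Inject methods, component lifecycle, etc.):

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;
import java.util.Map;

public class WireDependenciesSketch {
    // Local stand-in for an injection annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Inject {}

    // Toy registry: wires any @Inject-annotated field whose type it knows.
    static void wireDependencies(Object target, Map<Class<?>, Object> components)
            throws IllegalAccessException {
        for (Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(Inject.class)) {
                Object component = components.get(f.getType());
                if (component != null) {
                    f.setAccessible(true);
                    f.set(target, component);
                }
            }
        }
    }

    // A "mapper" that wants the cache injected before it is used.
    static class MyMapper {
        @Inject Map<String, Integer> cache; // stand-in for the real cache

        int lookup(String key) { return cache.getOrDefault(key, -1); }
    }

    public static void main(String[] args) throws Exception {
        MyMapper mapper = new MyMapper();
        wireDependencies(mapper, Map.of(Map.class, Map.of("a", 1)));
        System.out.println(mapper.lookup("a"));
    }
}
```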