I don't think you should be too concerned about split brains if Infinispan is not the source of the data, i.e. if you can reconstruct the data from your primary datasource (your database).
If you are using Hot Rod, it can be secured using SASL and/or SSL, so we support client certificates, username/password and Kerberos. If you are using embedded mode you will have to provide a JAAS Subject yourself and use the appropriate security APIs. Have a look at the blog posts http://blog.infinispan.org/2014/04/infinispan-security-1-authorization.html and http://blog.infinispan.org/2014/07/infinispan-security-2-authorization.html for more information.
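For example, a username/password setup over TLS with the Java Hot Rod client looks roughly like this. This is only a sketch: the host, credentials and truststore path are placeholders, and the exact builder methods differ between client versions (older clients take a CallbackHandler instead of username()/password()):

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class SecuredClient {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222);
            // SASL authentication: PLAIN here, but DIGEST-MD5, EXTERNAL
            // (client certificates) or GSSAPI (Kerberos) work the same way
            builder.security().authentication()
                   .enable()
                   .saslMechanism("PLAIN")
                   .username("appuser")                 // placeholder
                   .password("secret".toCharArray());   // placeholder
            // TLS encryption; truststore values are placeholders too
            builder.security().ssl()
                   .enable()
                   .trustStoreFileName("/path/to/truststore.jks")
                   .trustStorePassword("secret".toCharArray());

            RemoteCacheManager rcm = new RemoteCacheManager(builder.build());
            try {
                rcm.getCache().put("hello", "world");
            } finally {
                rcm.stop();
            }
        }
    }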
I think I would rather run a standalone instance on each server. There is more than one component making up the web services, so I will run each webservice component in a separate JVM with Infinispan alongside.
Reconstruction is not my problem. I "fear" that I might not be able to get any data from the cache at all, because I read (also on the blog) that Infinispan will block access if it considers the data stale. Serving "outdated" data would be a minor problem (or better than not serving anything ;-)) as long as it gets updated once both nodes are up again. Will Infinispan sync changes that were made after one node went offline?
Partition handling is not enabled by default, so unless you enable it the behaviour will be the same as in previous versions, where access is not blocked.
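If you do want the new behaviour, you have to switch it on per cache. A minimal sketch with the embedded programmatic API as introduced in Infinispan 7.0 (later versions replaced enabled() with whenSplit()):

    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;

    // Partition handling is an explicit opt-in: once enabled, a minority
    // partition refuses access instead of serving possibly stale data.
    ConfigurationBuilder cfg = new ConfigurationBuilder();
    cfg.clustering()
       .cacheMode(CacheMode.DIST_SYNC)
       .partitionHandling().enabled(true);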
When the nodes come up again (meaning that one of the nodes is really restarted), the data will be synchronized. However, without partition handling, if the nodes merely 'think' that the other node is down (due to a network split, or even a long GC pause), there won't be any synchronization after the two partitions merge, and the data on one node could stay stale (this scenario is not tested very well, so I wouldn't be surprised if some data got lost completely during the merge).
Assuming you use either replicated mode or distributed mode with numOwners = 2, data will not be lost when one node crashes or shuts down.
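For illustration, here are both setups side by side; again only a sketch, with made-up cache names and the default clustered transport:

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.CacheMode;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.configuration.global.GlobalConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class TwoOwnerCaches {
        public static void main(String[] args) {
            // Cache manager with the default clustered (JGroups) transport
            DefaultCacheManager cm = new DefaultCacheManager(
                    GlobalConfigurationBuilder.defaultClusteredBuilder().build());

            // Distributed: each entry is stored on numOwners (= 2) nodes,
            // so a single crash never removes the last copy
            ConfigurationBuilder dist = new ConfigurationBuilder();
            dist.clustering().cacheMode(CacheMode.DIST_SYNC).hash().numOwners(2);
            cm.defineConfiguration("distCache", dist.build());

            // Replicated: every node holds a full copy of the cache
            ConfigurationBuilder repl = new ConfigurationBuilder();
            repl.clustering().cacheMode(CacheMode.REPL_SYNC);
            cm.defineConfiguration("replCache", repl.build());

            Cache<String, String> cache = cm.getCache("distCache");
            cache.put("k", "v");
            cm.stop();
        }
    }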
Radim is right: data can be lost if there are more nodes and, after a (false) split brain, one of the partitions ends up with zero owners for some of the keys. But in your case there is only one way to split the cluster, and both partitions will have all the keys, so you're safe.