If you rely on monitoring a directory for bundles to deploy, there is always a chance of hitting a race condition (detecting a bundle halfway through a copy). The one exception is perhaps creating a symlink to a folder inside the deployment directory, since that is an atomic file system operation, but it is a nasty workaround.
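To make the symlink workaround concrete, here is a minimal sketch of the idea: the bundles are fully written to a staging directory first, and only then exposed to the deployment scanner via an atomically renamed symlink. The paths and class name are hypothetical, and the atomicity of the rename holds on POSIX file systems.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicDeploy {

    /**
     * Expose a fully prepared directory to a deployment scanner in one step.
     * The symlink is created under a temporary name and then renamed into
     * place; on POSIX file systems the rename is atomic, so the scanner
     * never observes a half-copied directory.
     */
    static void atomicPublish(Path staged, Path target) throws IOException {
        Path tmpLink = target.resolveSibling(target.getFileName() + ".incoming");
        Files.deleteIfExists(tmpLink);
        Files.createSymbolicLink(tmpLink, staged);
        // Moves the link itself (not its target) as a single rename.
        Files.move(tmpLink, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("deploy-demo");
        Path staged = Files.createDirectory(base.resolve("staged"));
        Files.write(staged.resolve("bundle.jar"), new byte[] {1, 2, 3});
        atomicPublish(staged, base.resolve("bundles"));
    }
}
```

The scanner only ever sees either no `bundles` entry at all or a link to a complete set of files, which is exactly why the trick avoids the half-copy race.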
Doesn't using an *.ear file as the deployment artifact go against the OSGi mentality? How am I to update a single nested bundle dynamically without redeploying the whole ear?
I think the only options are to support hot bundle deployment that refreshes the wires of every affected bundle, just as JBoss AS 7.1 behaved and as other Felix- and Equinox-based products still do, and/or to support deployment descriptors the way Apache Karaf does via features.xml. I haven't used Karaf extensively, so I cannot comment, but I wonder whether modifying a features.xml requires a total redeployment of all the listed bundles, just as dropping in an updated ear would?
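For reference, the "update one bundle and refresh the affected wires" behaviour is expressible directly through the standard OSGi framework API (it needs an OSGi framework on the classpath, so this is an illustrative sketch rather than a standalone program; the method and variable names are my own):

```java
import java.io.FileInputStream;
import java.util.Collections;
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.wiring.FrameworkWiring;

// Called from code that already holds a BundleContext, e.g. an activator.
void updateSingleBundle(BundleContext context, long bundleId, String newJar) throws Exception {
    Bundle bundle = context.getBundle(bundleId);
    try (FileInputStream in = new FileInputStream(newJar)) {
        bundle.update(in); // install the new revision of just this bundle
    }
    // Ask the framework to recompute the wires of every bundle affected
    // by the update (i.e. importers of the old revision's packages).
    FrameworkWiring wiring = context.getBundle(0).adapt(FrameworkWiring.class);
    wiring.refreshBundles(Collections.singleton(bundle));
}
```

Nothing else in the system is redeployed; only the bundles actually wired to the old revision are stopped, re-resolved, and restarted by the refresh.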
I'm not quite sure I understand what you mean by it being unsafe to deploy subsets of bundles. Apache Felix and Eclipse Equinox seem to manage this without any issues, or at least none that I have encountered.
Maybe you mean that some bundles could be wired to an old version of a bundle and others to a newer version, for no other reason than their deployment order? That seems natural to me, and it shouldn't result in any nasty ClassCastExceptions: in the face of multiple versions, OSGi will, where possible, wire bundles to a common version (the lowest common denominator) precisely to avoid issues like CCEs.
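As a sketch of the situation I mean, consider two revisions of a hypothetical exporter installed side by side, with two importers whose version ranges differ (all bundle names invented for illustration):

```
# Exporter, two revisions installed side by side:
Bundle-SymbolicName: com.example.util
Bundle-Version: 1.0.0
Export-Package: com.example.util;version="1.0.0"

Bundle-SymbolicName: com.example.util
Bundle-Version: 2.0.0
Export-Package: com.example.util;version="2.0.0"

# Importer A accepts either revision:
Import-Package: com.example.util;version="[1.0,3.0)"

# Importer B requires the new revision:
Import-Package: com.example.util;version="[2.0,3.0)"
```

Depending on install and resolution order, A may end up wired to 1.0.0 while B is wired to 2.0.0, and both run happily; class space problems only arise if the two importers start exchanging objects from those packages with each other.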
Can you please give more specific examples of issues this approach may cause?