I have never tried anything like this, but if I was going to I would add a new partition (possibly called 'WindowsPartition') on each of the Windows nodes and deploy a new farm-service.xml that scanned a different folder and deployed only to the 'WindowsPartition'.
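Something along these lines for the Windows nodes. The MBean and attribute names below are from memory of the JBoss farm service and should be checked against your version; 'WindowsPartition' and the farm_windows/ directory are just example names:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<server>
  <!-- Second farm service, bound to the Windows-only partition and
       scanning its own directory. All names here are illustrative. -->
  <mbean code="org.jboss.ha.framework.server.FarmMemberService"
         name="jboss:service=FarmMember,partition=WindowsPartition">
    <attribute name="PartitionName">WindowsPartition</attribute>
    <attribute name="ScanPeriod">5000</attribute>
    <attribute name="URLs">farm_windows/</attribute>
  </mbean>
</server>
```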
darranl's solution is a good one.
> I have never tried anything like this but if I was
> going to I would add a new partition (Possible called
> 'WindowsPartition') on each of the windows nodes and
> deploy a new farm-service.xml that searched a
> different folder and only deployed to the
If I understand you correctly, you're suggesting running two partitions on the same node? Is this possible? Can both partitions utilize the same cluster-aware JNDI tree? The reason I ask this question is because there are components of my app that use the cluster-wide JNDI to locate other components. For example, a HASingleton-based JMX component registers itself in the JNDI tree whenever it is started as a master, and other components of the system utilize the JNDI tree to locate this singleton instance.
There is no problem with running two partitions on the same node (I have done this myself before but for a different reason).
The purpose of the new cluster is just selective farming; your application can still continue to use the original cluster.
> The purpose of the new cluster is just selective
> farming; your application can still continue to
> use the original cluster.
OK, just to make sure I've got this right...
1. Set up a second partition only on the Windows nodes.
2. Create a farm-service.xml for this new partition that farms to/from a new directory.
Is this correct? If so, then should I modify the original cluster config to allow for hot deployment from the new farmed directory being managed by the second cluster? I'm thinking that this will allow the original cluster to automatically deploy/undeploy these newly-farmed components, allowing the components to access the HA-JNDI tree and utilize the original cluster's classloader.
Is this right, or am I off-base on this?
Yes, the steps that you describe are correct.
Might need a bit more input from Sacha on the following comments, but as far as I am aware you will not need to make any changes to the original cluster configuration.
Looking at the default farm-service.xml, there is a dependency on the main deployer.
I think this means it is the deployer used to perform the actual deployment of the items; the farm service is just there to distribute the items and to ask the main deployer to deploy them.
In the meantime I would just give these steps a go and see how well it works.
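The main-deployer dependency looks roughly like this in the default farm-service.xml. This is a sketch from memory of the JBoss service descriptor format, so check the copy shipped with your release:

```xml
<!-- Sketch: the farm service delegates actual deployment to the
     MainDeployer it depends on; it only distributes the files. -->
<mbean code="org.jboss.ha.framework.server.FarmMemberService"
       name="jboss:service=FarmMember,partition=DefaultPartition">
  <depends optional-attribute-name="Deployer">jboss.system:service=MainDeployer</depends>
  <attribute name="URLs">farm/</attribute>
</mbean>
```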
Yes, Darran's steps are correct. Try it and tell us how it goes.
It's been a while since we last spoke about this, but I believe I have found an interesting solution that does not require the creation of a separate partition. Basically, I modified the farm-service.xml file on the Windows node to include an additional deployment URL:
<attribute name="URLs">farm/,farm_windows/</attribute>
This additional URL is only specified in the farm-service.xml on the Windows node, so the Unix nodes have no idea that this URL even exists. I partially tested this by creating the farm_windows directory on my Windows node and placing one of my Windows-specific deployment files there. I started the Windows node and verified that it deployed everything, including the item in the farm_windows directory. I then started the Unix node and watched to see what was farmed and deployed. Lo and behold, only those items in the farm directory on the Windows node were farmed and deployed to the Unix node. I have not yet tested this with a second Windows node, as my server cluster does not currently have another Windows machine, but I suspect that it will work as expected and farm all of the deployments, including those in the farm_windows directory, so long as the farm-service.xml on the Windows node specifies farm_windows in the URL list.
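For reference, the relevant portion of the Windows node's farm-service.xml now looks something like this (other attributes left at their defaults; MBean names are from memory and may differ in your version):

```xml
<!-- Windows node only: the Unix nodes' copies list just farm/. -->
<mbean code="org.jboss.ha.framework.server.FarmMemberService"
       name="jboss:service=FarmMember,partition=DefaultPartition">
  <attribute name="PartitionName">DefaultPartition</attribute>
  <!-- The extra farm_windows/ entry exists only in this file,
       so its contents are never farmed to the Unix nodes. -->
  <attribute name="URLs">farm/,farm_windows/</attribute>
</mbean>
```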