Clustered DeploymentRepository for 5.1
brian.stansberry Feb 9, 2009 3:01 PM

I'll be posting here some FYI stuff re: what I'm doing to get a clustered impl of a DeploymentRepository working, specifically for 5.1.
There's been a lot of demand for a Farm service replacement, so my 5.1 target is to restore the equivalent functionality. In simplified terms that boils down to:
1) In DeploymentRepository.load() reconcile the local repo with the cluster. Equivalent to what the old FarmMemberService did in startService().
2) In DeploymentRepository.getModifiedDeployments() detect changes in the local repo and push them out to the cluster before returning. Equivalent to what the old FarmMemberService did during the periodic scan() calls.
The former is generally useful even without the latter. For example, imagine users want Jopr and the DeploymentManager to handle pushing changes out to the cluster; in that case #2 wouldn't be wanted, but #1 would still be useful to let a node that has been offline sync up with the cluster when it starts.
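The two responsibilities can be sketched roughly like this. This is illustrative only: the FarmSyncSketch class and its timestamp maps are hypothetical stand-ins, not the real DeploymentRepository API, which works against actual deployment content and the cluster rather than in-memory maps.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the two sync phases described above.
public class FarmSyncSketch
{
   // deployment name -> last-modified timestamp
   private final Map<String, Long> localRepo = new HashMap<String, Long>();
   private final Map<String, Long> clusterView = new HashMap<String, Long>();

   // #1: on load(), pull anything the cluster has that we lack or that is newer
   public List<String> reconcileOnLoad()
   {
      List<String> pulled = new ArrayList<String>();
      for (Map.Entry<String, Long> e : clusterView.entrySet())
      {
         Long local = localRepo.get(e.getKey());
         if (local == null || local < e.getValue())
         {
            localRepo.put(e.getKey(), e.getValue());
            pulled.add(e.getKey());
         }
      }
      return pulled;
   }

   // #2: on getModifiedDeployments(), push local changes out before returning
   public List<String> pushModified()
   {
      List<String> pushed = new ArrayList<String>();
      for (Map.Entry<String, Long> e : localRepo.entrySet())
      {
         Long remote = clusterView.get(e.getKey());
         if (remote == null || remote < e.getValue())
         {
            clusterView.put(e.getKey(), e.getValue());
            pushed.add(e.getKey());
         }
      }
      return pushed;
   }

   // hooks for experimentation
   public void putLocal(String name, long ts) { localRepo.put(name, ts); }
   public void putCluster(String name, long ts) { clusterView.put(name, ts); }
   public Long localTimestamp(String name) { return localRepo.get(name); }
}
```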
The fundamental thing that needs to be in place for this to work is an intra-cluster communication facility available to the clustered DeploymentRepository. Some day an advanced clustered ProfileService could have a JGroups channel made available to it as part of bootstrap, but for the 5.1 requirements I don't need or want to go that far. Instead I'm looking to handle this with subprofile dependencies: a "farm" subprofile depends on the subprofile that includes "deploy". The result is that by the time load() is called on a ClusteredDeploymentRepository, the "deploy" subprofile has been installed, so the HAPartition is available in the runtime. The ClusteredDeploymentRepository then uses a service locator class to resolve the HAPartition.
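As a rough illustration of that service-locator step (the names here are simplified stand-ins; the real partition interface is org.jboss.ha.framework.interfaces.HAPartition, and the real locator lives in the AS codebase):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical illustration of a service locator for the HAPartition.
public class HAPartitionLocator
{
   // stand-in for the real org.jboss.ha.framework.interfaces.HAPartition
   public interface HAPartition
   {
      String getPartitionName();
   }

   private static final ConcurrentMap<String, HAPartition> partitions =
         new ConcurrentHashMap<String, HAPartition>();

   // invoked when the "deploy" subprofile installs the partition service
   public static void registerPartition(HAPartition partition)
   {
      partitions.put(partition.getPartitionName(), partition);
   }

   // invoked from ClusteredDeploymentRepository.load(); safe because the
   // "farm" subprofile's dependency on "deploy" guarantees registration
   // has already happened
   public static HAPartition getPartition(String name)
   {
      HAPartition p = partitions.get(name);
      if (p == null)
         throw new IllegalStateException("HAPartition " + name + " not installed; " +
               "check that the farm subprofile depends on the deploy subprofile");
      return p;
   }
}
```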
I've got that working on my trunk checkout. The following shows how it works in a static setup, which I believe is what we are targeting for 5.1. The equivalent should work fine with the XML-based stuff Emanuel's doing as well; we just have to ensure the "farm" subprofile is installed after whatever subprofile deploys the HAPartition.
public class StaticClusteredProfileFactory extends StaticProfileFactory
{
   ....

   @Override
   protected void createProfileMetaData(ProfileKey rootKey, URL url) throws Exception
   {
      if (rootKey == null)
         throw new IllegalArgumentException("Null root profile key.");

      // Create bootstrap profile meta data
      ProfileKey bootstrapKey = new ProfileKey(BOOTSTRAP_NAME);
      ProfileMetaData bootstrap = createProfileMetaData(
            BOOTSTRAP_NAME, false, new URI[] { getBootstrapURI() }, new String[0]);
      addProfile(bootstrapKey, bootstrap);

      // Create deployers profile meta data
      ProfileKey deployersKey = new ProfileKey(DEPLOYERS_NAME);
      ProfileMetaData deployers = createProfileMetaData(
            DEPLOYERS_NAME, false, new URI[] { getDeployersURI() },
            new String[] { BOOTSTRAP_NAME });
      addProfile(deployersKey, deployers);

      // Create applications profile meta data
      ProfileKey applicationsKey = new ProfileKey(APPLICATIONS_NAME);
      URI[] deployURIs = getApplicationURIs().toArray(new URI[getApplicationURIs().size()]);
      String[] applicationSubProfiles = new String[] { BOOTSTRAP_NAME, DEPLOYERS_NAME };
      ProfileMetaData applications = createProfileMetaData(
            APPLICATIONS_NAME, true, deployURIs, applicationSubProfiles);
      addProfile(applicationsKey, applications);

      // Create the farm profile, if configured; it depends on the applications profile
      ProfileMetaData farm = null;
      if (getFarmURIs() != null)
      {
         ProfileKey farmKey = new ProfileKey(FARM_NAME);
         URI[] farmURIs = getFarmURIs().toArray(new URI[getFarmURIs().size()]);
         String[] farmSubProfiles = new String[] { APPLICATIONS_NAME };
         farm = createClusteredProfileMetaData(
               FARM_NAME, true, farmURIs, farmSubProfiles, getPartitionName());
         addProfile(farmKey, farm);
      }

      // The root profile depends on the farm profile when one exists
      String[] rootSubProfiles = farm == null
            ? new String[] { APPLICATIONS_NAME } : new String[] { FARM_NAME };
      ProfileMetaData root = createProfileMetaData(
            rootKey.getName(), true, new URI[0], rootSubProfiles);
      addProfile(rootKey, root);
   }

   protected ProfileSourceMetaData createClusteredSource(URI[] uris, boolean hotDeployment)
   {
      ClusteredProfileSourceMetaData source = null;
      if (hotDeployment)
      {
         source = new HotDeploymentClusteredProfileSourceMetaData();
      }
      else
      {
         source = new ImmutableClusteredProfileSourceMetaData();
      }
      source.setPartitionName(getPartitionName());

      List<String> sources = new ArrayList<String>();
      for (URI uri : uris)
         sources.add(uri.toString());
      source.setSources(sources);
      return source;
   }

   ...
}
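Just to illustrate why the subprofile wiring above guarantees ordering: the sub-profile declarations form a dependency graph, and any dependency-respecting install order puts "farm" after "applications" (and hence after whatever deploys the HAPartition). A toy topological sort, purely illustrative; the real ProfileService does its own dependency resolution:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative only: computes an install order that respects sub-profile
// dependencies via depth-first traversal.
public class ProfileOrderSketch
{
   public static List<String> installOrder(Map<String, List<String>> deps)
   {
      List<String> order = new ArrayList<String>();
      Set<String> visited = new HashSet<String>();
      for (String profile : deps.keySet())
         visit(profile, deps, visited, order);
      return order;
   }

   private static void visit(String profile, Map<String, List<String>> deps,
                             Set<String> visited, List<String> order)
   {
      if (!visited.add(profile))
         return; // already handled
      // install all sub-profiles first
      for (String dep : deps.getOrDefault(profile, List.of()))
         visit(dep, deps, visited, order);
      order.add(profile);
   }
}
```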
Now that I know the dependency issues can be handled easily enough, I'll shift back to focusing on the internal details of keeping the local repositories in sync.