So with GateIn 3.2 in sight and the release of GateIn 3.2 CR1, I figured now's as good a time as any to write a little about the new management component and more specifically the Command Line Interface (CLI) responsible for handling the execution of management operations using commands. But before we jump into the CLI, let's discuss a little bit about the goals of the management component, which is covered in more detail in the community docs.
Extensibility
All management resources and operations are built by creating management extensions. The management extension is the entry point into the management component, responsible for registering resources and operations. This means that any resource or operation we include in GateIn conforms to the same SPI that someone else would use to customize and extend the management capabilities of GateIn. This allows us (GateIn) to continually add extensions to support management needs, while hopefully also inviting others to contribute or add their own customizations.
Interface Agnostic
A management extension is completely unaware of the interfaces over which it is exposed. This allows the extension to focus on the "management" logic rather than dealing with things like REST/HTTP and CLI syntax. In other words, by conforming to a Java SPI to create an extension, the resources and operations that are registered are automatically exposed over the interfaces the management component supports (REST and CLI). This also means that additional interfaces can be added without needing to change the extensions.
Common Usability between Interfaces
Because the interfaces are built on a common, simplified API, they behave in a consistent manner: managing a component through one interface is very similar to managing it through another.
Hierarchical Representation
Instead of defining all resources and operations at one level (similar to JMX), the management component supports the registration of resources in a hierarchical manner. This makes the organization of resources cleaner and operation execution simpler. We will see how nice this is when dealing with the MOP (Model Object for Portal) management extension, which has been included in GateIn 3.2.
So now that we've covered the goals of the management component, let's take a look at one of the interfaces it provides, the CLI.
So what is this CLI ?
It's an interactive shell built on top of CRaSH that, when deployed to GateIn, communicates over SSH to execute management operations. It supports common commands you would find in an OS, like ls and cd, for listing and traversing managed resources. Tab completion and help commands are just some of the features the CLI provides to assist in managing portal resources.
So what can we do with this CLI ?
A lot, really, as long as a management extension supports the request. Commands like export and import are provided to make managing data easier, and the mgmt exec command is available to support custom management requests, as we will see in some examples below. You can also write your own commands (documented in the CRaSH project) to communicate with the management component.
So where do we start ?
First we'll download GateIn 3.2 CR1 from here. I'll be using the JBoss AS 5 distribution running @ localhost.
Unzip the distribution to some directory which we will refer to as $JBOSS_HOME and startup the portal.
$ unzip GateIn-3.2.0-CR01-jbossas5.zip
$ cd GateIn-3.2.0-CR01-jbossas5/bin
$ ./run.sh
Now that the portal is running, let's check out and build the CLI application. To check out the source we will clone the gatein-management project from GitHub and check out the 1.0.1-GA tag.
$ git clone git://github.com/gatein/gatein-management.git
$ cd gatein-management
$ git checkout 1.0.1-GA
and then build it.
$ mvn clean install
Remember, you will need to add the JBoss Maven repository to your ~/.m2/settings.xml file as described here: MavenGettingStarted-Developers.
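For orientation, a minimal settings.xml profile for the JBoss repository looks roughly like the sketch below. Treat the repository URL and profile layout here as assumptions; the MavenGettingStarted-Developers page linked above is the authoritative reference.

```xml
<settings>
  <profiles>
    <profile>
      <id>jboss-public</id>
      <repositories>
        <repository>
          <id>jboss-public-repository-group</id>
          <!-- Assumed URL; confirm against the MavenGettingStarted-Developers page -->
          <url>https://repository.jboss.org/nexus/content/groups/public/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>jboss-public</activeProfile>
  </activeProfiles>
</settings>
```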
Copy the CLI application to the GateIn server. Note: if you are deploying on JBoss AS 5 you must copy the CLI as an exploded WAR.
$ cp -r cli/target/gatein-management-cli $JBOSS_HOME/server/default/deploy/gatein-management-cli.war
Once deployed, you should see something similar to the following in the server logs
INFO [SSHLifeCycle] CRaSSHD started on port 2000
INFO [PluginManager] Initialized plugin Plugin[type=SSHPlugin,interface=SSHPlugin]
which says the CLI application was deployed and is listening on the default port 2000. The port is configurable by editing the WEB-INF/crash/crash.properties file in the CLI WAR. The next step is to connect to the CLI over SSH as the default GateIn portal admin (root/gtn).
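If you do need a different port, the relevant entry in crash.properties looks something like the fragment below. The property name follows the CRaSH configuration conventions, so verify it against the file actually shipped in the WAR:

```properties
# Port the CRaSH SSH daemon listens on (default 2000)
crash.ssh.port=2000
```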
$ ssh -p 2000 root@localhost
root@localhost's password:
(CRaSH ASCII art banner)
1.0.0-beta24
Follow and support the project on http://crsh.googlecode.com
GateIn Management CLI running @ sterling
It is Fri Feb 10 13:19:34 EST 2012 now
%
From here we can execute the help command to list available commands. You can add the --help or -h option to any command to view its usage, or execute man <command> for detailed information. The first command we'll use is the mgmt connect command.
% mgmt connect --help
usage: mgmt [-h | --help] connect [-u | --username] [-p | --password] [-c | --container]

[-h | --help]       command usage
[-u | --username]   the user name
[-p | --password]   the user password
[-c | --container]  portal container name (default is 'portal')
The mgmt connect command connects us to the management system; it also allows us to connect to a different portal container, or as a different user, if we wish. Since we'll be accepting the defaults, we can execute the command with no options, authenticating as root just as we did over SSH.
% mgmt connect
Password:
Successfully connected to gatein management system: [user=root, container='portal', host='localhost']
[ /]%
At this point we are at the root resource of the management system. We can use commands like cd and ls, where cd changes the path of the current resource and ls executes the 'read-resource' operation. So if we execute the ls command we will see the mop resource
[ /]% ls
mop
that has been provided by the MOP management extension. But before we go into this, let's look at the ls command in more detail. As I indicated above, the ls command executes the 'read-resource' operation; however, the 'read-resource' operation contains much more information than just the child resources. To see the full details of this operation we can use the mgmt exec command, which allows us to build a management request manually.
[ /]% mgmt exec --help
usage: mgmt [-h | --help] exec [-c | --contentType] [-f | --file] [-a | --attribute] [-o | --operation] path

[-h | --help]        command usage
[-c | --contentType] content type of an operation
[-f | --file]        File name
[-a | --attribute]   Specifies an attribute.
[-o | --operation]   Operation name
path
So to execute the 'read-resource' operation we can just add the --operation option to the command.
[ /]% mgmt exec --operation read-resource
{
  "description": "Available operations and children (sub-resources).",
  "children": [{
    "description": "MOP (Model Object for Portal) Managed Resource, responsible for handling management operations on navigation, pages, and sites.",
    "name": "mop"
  }],
  "operations": [{
    "operation-description": "Lists information about a managed resource, including available operations and children (sub-resources).",
    "operation-name": "read-resource"
  }]
}
This is the entire result of the 'read-resource' operation, which gives us information about which operations are supported and which children are available at any given resource. We'll see some more examples of its usage later, but for the most part the mgmt exec command is for advanced purposes.
So let's see what the MOP extension gives us. We can continually cd and ls on every resource to navigate the MOP resources, or we can cd directly to a resource by giving the full path. If we press tab on the path argument, the CLI will auto-complete it for us. For example, we can type cd mop/ and hit tab and it will display all children. Typing mop/p and hitting tab will auto-complete to mop/portalsites, and so on. So if we navigate to the site classic and list the resources there
[ /]% cd mop/portalsites/classic/
[classic]% ls
pages
navigation
portal
we see a couple of things. First, the resources of the MOP extension are structured very similarly to the structure of the MOP in GateIn. We also see the three resources that make up the MOP (pages, navigation, and site layout/portal config), which we will be exporting and importing. We can also view the XML for each of these resources by executing the cat command.
[classic]% cat navigation/notfound
<?xml version='1.0' encoding='UTF-8'?>
<node-navigation xmlns="http://www.gatein.org/xml/ns/gatein_objects_1_2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.gatein.org/xml/ns/gatein_objects_1_2 http://www.gatein.org/xml/ns/gatein_objects_1_2">
  <priority>1</priority>
  <page-nodes>
    <parent-uri></parent-uri>
    <node>
      <name>notfound</name>
      <label>NotFound</label>
      <visibility>SYSTEM</visibility>
    </node>
  </page-nodes>
</node-navigation>
The cat command just executes the 'read-config-as-xml' operation, which we can see by manually executing the mgmt exec command and supplying the --contentType option to specify the format of the result. Note: in later releases of gatein-management, 'read-config-as-xml' will be renamed to just 'read-config', since the content type is what specifies the format.
[classic]% mgmt exec --operation read-config-as-xml --contentType xml navigation/notfound
<?xml version='1.0' encoding='UTF-8'?>
<node-navigation xmlns="http://www.gatein.org/xml/ns/gatein_objects_1_2"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.gatein.org/xml/ns/gatein_objects_1_2 http://www.gatein.org/xml/ns/gatein_objects_1_2">
  <priority>1</priority>
  <page-nodes>
    <parent-uri></parent-uri>
    <node>
      <name>notfound</name>
      <label>NotFound</label>
      <visibility>SYSTEM</visibility>
    </node>
  </page-nodes>
</node-navigation>
Now what we'll want to do is actually modify this data using the export and import commands. We'll be exporting the site classic, so let's first navigate back up so we can refer to the classic resource by name instead of using "." in our export commands. The "." and ".." identifiers refer to the current resource and the parent resource, respectively.
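For illustration, had we stayed at the classic resource, an export of the current resource using "." would look something like this (a sketch; the timestamped file name shown is invented):

```text
[classic]% export --file /tmp .
Export complete ! File location: /tmp/classic_2012-02-08_15-25-10.zip
```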
[classic]% cd ..
[portalsites]%
Looking at the export command, we see two options: file and filter.
[portalsites]% export --help
usage: export [-h | --help] [-f | --file] [-r | --filter] path

[-h | --help]   command usage
[-f | --file]   File name
[-r | --filter] Specifies the value of the filter to use during an export for example.
path
The --file option specifies where to write the export. Note: since we are communicating over SSH, all I/O operations are done on the portal server, which in our case is the same machine. We'll see how to export and import between different machines using SCP later.
We can give the export command either an absolute file name or a directory, in which case it will append a timestamp to the name of the exported resource.
[portalsites]% export --file /tmp classic
Export complete ! File location: /tmp/classic_2012-02-08_15-21-43.zip
[portalsites]% export --file /tmp/classic.zip classic
Export complete ! File location: /tmp/classic.zip
We can export at all levels, meaning you can export an entire portal container by exporting the mop resource, or go all the way down to individual pages and navigation nodes.
[portalsites]% export --file /tmp /mop/
Export complete ! File location: /tmp/mop_2012-02-08_16-23-44.zip
[portalsites]% export --file /tmp classic/pages/homepage
Export complete ! File location: /tmp/homepage_2012-02-08_16-24-27.zip
[portalsites]% export --file /tmp classic/navigation/home
Export complete ! File location: /tmp/home_2012-02-08_16-24-33.zip
We can also filter the export. So if we want to export only the homepage and sitemap pages, we specify the --filter option like so
[portalsites]% export --file /tmp --filter page-name:homepage,sitemap classic/pages/
Export complete ! File location: /tmp/pages_2012-02-08_16-32-43.zip
The 'page-name' key and the syntax of the filter option are explained in detail in the community docs I linked at the beginning of this blog.
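The filter also supports exclusion with a leading '!', the same syntax we'll use later when filtering over SCP. As a sketch, exporting every page except homepage might look like this (the output file name is invented, and the '!' usage with page-name is extrapolated from the nav-uri example shown later, so check the community docs):

```text
[portalsites]% export --file /tmp --filter page-name:!homepage classic/pages/
Export complete ! File location: /tmp/pages_2012-02-08_16-40-12.zip
```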
So now that we have the data exported, we can modify the contents and use that to import to the portal. You can modify the contents however you want, but some knowledge of the gatein_objects XSD is required. The following will create a new site called 'demo', but it is just an example. I am running Linux, so this may not work exactly the same on your operating system. I'll be using the classic.zip I exported to my /tmp directory.
$ cd /tmp
$ unzip classic.zip
$ sed -i 's/classic/demo/g' portal/classic/navigation.xml
$ sed -i 's/classic/demo/g' portal/classic/portal.xml
$ sed -i 's/Classic/Demo/g' portal/classic/portal.xml
$ sed -i 's/Default/SimpleSkin/g' portal/classic/portal.xml
$ mv portal/classic/ portal/demo
$ zip -r demo.zip portal/
Once the export has been modified and is ready for import, we can use the import command.
[portalsites]% import --help
usage: importfile [-h | --help] [-f | --file] [-m | --importMode] path

[-h | --help]       command usage
[-f | --file]       File name
[-m | --importMode] The import mode for an import operation
path
The import command is very similar to the export command and defines the --file option to specify the file to use for the import. So to import the demo.zip file I created above
[portalsites]% import --file /tmp/demo.zip /mop/
Successfully imported file /tmp/demo.zip
The mop resource is the only resource that supports the import operation, so we must specify the /mop path argument. After the import succeeds, you should see something similar to the following in the server logs
INFO [MopImportResource] Preparing data for import.
INFO [MopImportResource] Performing import using importMode 'merge'
INFO [MopImportResource] Import successful !
This indicates that the import was successful; at this point we should be able to log into the portal and see the new site demo.
You may have noticed the mention of an importMode in the server logs. This is an attribute of the 'import-resource' operation and can be specified using the --importMode option of the import command. Hitting tab after the import mode option in the CLI will list all available values (all of these modes are explained in the community docs)
[portalsites]% import --file /tmp/demo.zip --importMode
conserve   merge   overwrite   insert
and to specify the overwrite mode, which will delete and re-import all data in the zip,
[portalsites]% import --file /tmp/demo.zip --importMode overwrite /mop/
Successfully imported file /tmp/demo.zip
we should see in the server logs that the import mode was changed to overwrite. Note: after importing pages, it's likely you will need to log out of the portal, since GateIn caches pages. However, if you only change navigation, only a page refresh is required to view the changes.
If we want to execute the import using the mgmt exec command, we have to add two options that we haven't used yet: the --file and --attribute options. The --file option allows us to attach a file to the management request, and the --attribute option allows us to pass attributes as part of the request. Since demo.zip is the file we want to 'attach' and 'importMode' is an attribute of the 'import-resource' operation, we can execute the same request using mgmt exec
[portalsites]% mgmt exec --operation import-resource --contentType zip --attribute importMode=overwrite --file /tmp/demo.zip /mop/
Operation 'import-resource' at address '/mop' was successful.
This is just another example of how we can communicate with the management component manually using the mgmt exec command, and of how much easier the built-in commands that satisfy the same request are.
Secure Copy (SCP)
So as I mentioned before, all I/O operations using the CLI are done on the host machine (the portal server). If we want to export to or import from a different machine, we can use the scp command that is available on most operating systems.
The management component accepts one or two file arguments as part of the file list of the SCP command, and can be defined as such:
scp -P <port> <user>@<host>:{container}:{path} <file>
Specifying the container is optional, with the default value being 'portal'. The path is the same path we've seen in the CLI to specify the resource to export. So to export the site classic
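Following the syntax above, if the portal runs in a non-default container, the container segment comes right after the host; for example (the container name 'sample-portal' here is hypothetical):

```text
$ scp -P 2000 'root@localhost:sample-portal:/mop/portalsites/classic' classic.zip
```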
$ scp -P 2000 'root@localhost:portal:/mop/portalsites/classic' classic.zip
Even though we are using the same machine here (localhost), the same command applies across machines; you would just replace localhost with the host of the portal server.
The SCP command also supports the filter option, but it is used slightly differently than what we've seen in the CLI. To export all navigation nodes except home and sitemap, we can do the following
$ scp -P 2000 'root@localhost:/mop/portalsites/classic/navigation?filter=nav-uri:!home,sitemap' navigation.zip
To import, we just specify the local zip file as the first argument to the scp command.
$ scp -P 2000 classic.zip 'root@localhost:/mop'
and to specify the import mode (similar to what we did with the filter during export)
$ scp -P 2000 classic.zip 'root@localhost:/mop?importMode=overwrite'
Summary
So we've discussed the new management component, an interactive CLI to manage portal resources, and how easily we can export and import MOP data using the CLI or over SCP. Stay tuned for more information around GateIn management, including a screencast of the CLI in action, a dive into the REST interface, and possibly a 'how to' on creating a custom management extension.