1. Is there a set schema (DDL) I can give to the DBAs to build the Oracle database? Their cluster is heavily secured and I don't know if we can just let MQ create and drop its own tables as it pleases. They want us to share the same SID as the core application we are deploying.
No, there are currently no such schema files. ActiveMQ will try to create all necessary tables when it connects for the first time. As a workaround, you could let ActiveMQ create everything in a dev database and then export the schema from there.
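Once the exported schema has been applied by the DBAs, you can stop the broker from issuing any DDL itself with the `createTablesOnStartup` attribute of the JDBC persistence adapter. A minimal sketch of an activemq.xml fragment follows; the bean id, host, SID and credentials are placeholders for your environment:

```xml
<!-- activemq.xml fragment (sketch); "oracle-ds", dbhost, SID and the
     credentials are placeholders, not values from this thread -->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="primary">
  <persistenceAdapter>
    <!-- createTablesOnStartup="false": the broker issues no DDL at all,
         so the tables must already exist with the exported schema -->
    <jdbcPersistenceAdapter dataSource="#oracle-ds"
                            createTablesOnStartup="false"/>
  </persistenceAdapter>
</broker>

<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
  <property name="url" value="jdbc:oracle:thin:@dbhost:1521:SID"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>
```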
2. What are the security permissions MQ will require from Oracle to be able to manage its own tables?
Once the tables are there, ActiveMQ will only need read/write permissions on them (by default they are named ACTIVEMQ_MSGS, ACTIVEMQ_ACKS and ACTIVEMQ_LOCK).
3. Are there any implications of two independent MQ servers pointing at the same Oracle database at the same time and reading from common tables? We are using the transactional features as well, so this is a highly resilient solution we are trying to build. We are only using queues and topics, so any funnies around that I'd like to know as well.
That will not work, as the first instance that tries to access the DB will lock it by writing a row in the ACTIVEMQ_LOCK table. If you point two ActiveMQ instances to the same DB it will result in a master/slave setup, where the second instance remains on standby until it can lock the DB.
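No special configuration is needed to get that master/slave behaviour: a JDBC master/slave pair is simply two brokers with identical persistence configuration pointing at the same DataSource. A sketch, with `#oracle-ds` standing in for a shared Oracle DataSource bean:

```xml
<!-- identical fragment on both brokers; whichever starts first acquires
     the row lock in ACTIVEMQ_LOCK and becomes master, while the other
     blocks as slave until the lock is released -->
<persistenceAdapter>
  <jdbcPersistenceAdapter dataSource="#oracle-ds"/>
</persistenceAdapter>
```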
Currently it's not possible to customize the table prefixes used by ActiveMQ. As that's a valid feature to have, we have raised AMQ-2395.
4. Can MQ be configured to point at an Oracle cluster? Will the URL support the normal connection string with multiple servers listed?
The server URL is not processed by ActiveMQ itself; it is simply passed to the JDBC driver when retrieving a JDBC connection. So if your server URL contains multiple server instances, it should work as long as the JDBC driver knows how to connect using this URL. However, I recommend you test it first.
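For example, with the Oracle thin driver you should be able to list the cluster nodes in a TNS-style connect descriptor inside the JDBC URL. A sketch with hypothetical host and service names; the whole descriptor is handed untouched to the driver:

```xml
<bean id="oracle-ds" class="org.apache.commons.dbcp.BasicDataSource">
  <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver"/>
  <!-- both cluster nodes listed; the thin driver resolves the descriptor
       and picks a reachable address -->
  <property name="url" value="jdbc:oracle:thin:@(DESCRIPTION=
      (ADDRESS_LIST=
        (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node1)(PORT=1521))
        (ADDRESS=(PROTOCOL=TCP)(HOST=rac-node2)(PORT=1521)))
      (CONNECT_DATA=(SERVICE_NAME=ORCL)))"/>
  <property name="username" value="activemq"/>
  <property name="password" value="activemq"/>
</bean>
```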
5. Are there any issues I should be aware of that I may not have considered in this scenario? E.g. do I need to ensure multicast is turned off, etc.
Nothing specific on the JDBC side. However, instead of using direct JDBC persistence you might want to consider journaled JDBC, which is much faster; pure JDBC persistence can be slow.
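Journaled JDBC replaces the plain persistence adapter with a local journal that batches writes to the database. A sketch, assuming an ActiveMQ 5.x broker; the data directory and journal sizing are placeholders:

```xml
<!-- sketch: local journal in front of the Oracle store; writes are
     batched from the journal into the database asynchronously -->
<persistenceFactory>
  <journalPersistenceAdapterFactory dataDirectory="${activemq.base}/data"
                                    journalLogFiles="5"
                                    dataSource="#oracle-ds"/>
</persistenceFactory>
```

One caveat worth checking against the documentation: the journal lives on the broker's local disk, so messages still sitting in the master's journal are not visible to a slave sharing only the database. For that reason journaled JDBC is generally regarded as incompatible with a shared-database master/slave pair.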
If you have both ActiveMQ instances in the same LAN and you want them to be fully independent of each other, then disable the multicast-based network connector.
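Concretely, that means making sure neither broker configures a multicast discovery agent or network connector. A sketch:

```xml
<!-- no discoveryUri on the transport connector and no multicast://
     networkConnector element, so the two brokers never discover
     each other over the LAN -->
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>
```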
Some useful points there that raise some more questions for me.
What we are attempting to do is build a topology that provides a highly resilient message store, capable of surviving a complete server failure and of failing over to a secondary server (warm standby).
We are only using queues; we are not writing to them transactionally, but we do read from them transactionally. This raises the first question: does not writing transactionally affect in any way how the individual messages are persisted in MQ?
Obviously anything that does not get persisted to a resilient queue store is subject to being lost, so we want to manage the queue in the most resilient way possible. The client prefers to use a database for this (though I realise a shared journal on a SAN is probably better performance-wise).
In our topology, there will only ever be one consumer running against the queues on the primary server, and one would be started on the secondary server if we were to bring the warm standby machine online.
So is master/slave not appropriate for this setup? Are we better off not even starting MQ on the warm standby machine until it's needed? I'm guessing yes.
Can you envisage any way to implement this using the database, initially with the fastest transaction throughput, but without sacrificing the absolute resilience of guaranteed transaction storage and delivery?
Our application isn't a banking app, but we also have to guarantee to deliver every transaction once and only once and ensure given a failure we can engage the warm standby and continue knowing we won't miss or duplicate a thing.
I'd appreciate your recommendations.
We settled on a primary MQ server with a secondary on standby and a single persistent Oracle DB data store, and we fail our application over from one to the other (warm standby) in the event of any issues.
That's done what we wanted anyway.