Ramesh Bangaru wrote:
I am trying out HornetQ failover and facing one issue: the secondary server is not able to announce itself as a backup.
I have both HornetQ servers connected in a cluster; one is the primary server and the other is the backup server.
The two HornetQ servers are running on different machines. I am using shared-store mode and created one shared folder on the
primary server, with the sharing option enabled so that the secondary server can also access the same folder.
When I start the primary server, a server.lock file is created in the journal folder, and in the secondary server's log I can see that it is unable to
find the server.lock file (a file I/O exception), so I am getting a message that it is failing to announce the backup server service.
Please help me resolve this issue.
1) The server.lock file is created in the journal folder. Is that the right behaviour?
2) I am not using any shared file system (like NFS or SAN), just a normal folder with the sharing option enabled,
so both the primary and secondary servers share the same folder.
3) I am trying this out on Mac machines.
How do you share the folder between the two machines if you don't use NFS or SAN? Did you use AFP?
I have used AFP for file sharing between those 2 Mac machines.
In the secondary server's logs I am getting messages that the backup server services are not started because of the server.lock file.
As a requirement, you need to be able to take a distributed lock on a file. If the locking mechanism over the file is not working, then you need to fix the shared folder setup.
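To check whether the share supports the kind of lock HornetQ needs, you can probe it directly with `java.nio`. This is a standalone sketch (the class name and probe path are mine, not part of the attached sample); run it from both machines against a file on the shared journal folder. If either run prints LOCK FAILED or throws, the share does not support OS-level file locking and the backup will never be able to coordinate over server.lock.

```java
import java.io.File;
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;
import java.nio.file.StandardOpenOption;

public class LockCheck {

    /** Returns true if an exclusive OS-level lock could be taken on the given file. */
    public static boolean canLock(File probe) {
        try (FileChannel channel = FileChannel.open(probe.toPath(),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            FileLock lock = channel.tryLock();
            if (lock != null) {
                lock.release();
                return true;
            }
            return false; // another process already holds the lock
        } catch (IOException | OverlappingFileLockException e) {
            return false; // the file system does not support locking (or I/O failed)
        }
    }

    public static void main(String[] args) {
        // Hypothetical probe path; point this at a file inside the shared journal folder
        File probe = new File(args.length > 0 ? args[0] : "lock-probe.tmp");
        System.out.println(canLock(probe) ? "LOCK OK" : "LOCK FAILED");
        probe.delete();
    }
}
```

If this fails over AFP but works on a local disk, the problem is the share, not HornetQ.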
Please find attached the config files used on the 2 JBoss 7.1 servers located on different machines.
I am sharing the journal and the other folders between the Mac machines using AFP.
The server.lock file is created, but the secondary JBoss server is unable to get access to it.
Please have a look at the attached config files and let me know whether I can achieve failover between the 2 servers with this setup.
Also let me know how to fix the shared folder system.
Should we use either SAN or NFS to solve the shared folder problem?
Looking forward to your response.
The issue will be in the configuration of your shared file system; I'm not sure how I can help you with that.
All you need is to be able to guarantee a global lock on a file.
Are you using libaio on the journal?
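For reference, a shared-store live/backup pair in JBoss AS 7.1 is usually configured along these lines (a sketch only, not taken from the attached files; the directory paths are placeholders, and both servers must point them at the same shared NFS/SAN location):

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.1">
    <hornetq-server>
        <persistence-enabled>true</persistence-enabled>
        <shared-store>true</shared-store>
        <!-- false on the live server, true on the backup server -->
        <backup>false</backup>
        <failover-on-shutdown>true</failover-on-shutdown>
        <!-- all four directories must resolve to the same shared storage -->
        <bindings-directory path="/mnt/shared/bindings"/>
        <journal-directory path="/mnt/shared/journal"/>
        <large-messages-directory path="/mnt/shared/large-messages"/>
        <paging-directory path="/mnt/shared/paging"/>
    </hornetq-server>
</subsystem>
```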
I have solved the file lock issue by creating an NFS file system.
Now when I kill the primary server, the secondary server announces itself as backup correctly.
Now I want to write some sample client code which will use a defined queue, post messages, and test the failover behaviour.
I am using JNDI in my client code, which is tightly coupled with the user credentials and roles on a particular server, for creating the ConnectionFactory.
Please find attached the sample client I am using for testing this failover.
I run the program and, in between, kill the primary server, and the secondary announces itself as backup.
The primary server partly processed the posted messages, and the secondary server should carry on from where the primary server left off.
But I am not able to see any changes on the secondary server.
Could you please confirm whether this is the right way to test the failover?
SampleProducer.java.zip 1.4 KB
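One thing a failover test client generally has to handle is that a send in flight when the primary dies can fail with an exception; the client should treat that as transient and retry once the backup has taken over. A minimal, server-free sketch of that retry pattern (the class and method names here are mine, not from SampleProducer; in the real client the Callable body would wrap the producer.send(...) call):

```java
import java.util.concurrent.Callable;

public class FailoverRetry {

    /**
     * Runs the given send action, retrying up to maxAttempts times.
     * An exception from the action is treated as a transient interruption
     * (e.g. the connection failing over to the backup) and triggers a
     * retry after a short, growing back-off.
     */
    public static <T> T sendWithRetry(Callable<T> send, int maxAttempts) throws Exception {
        if (maxAttempts <= 0) {
            throw new IllegalArgumentException("maxAttempts must be positive");
        }
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return send.call();
            } catch (Exception e) {
                last = e;                      // likely a broken connection mid-failover
                Thread.sleep(100L * attempt);  // back off before retrying
            }
        }
        throw last; // all attempts failed; give up
    }
}
```

With a wrapper like this, messages the producer sends after the kill land on the backup instead of being lost to a single failed send.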