It's either 2.1.0 or 2.1.1 (which I think is most likely). I can't check until I get in tomorrow.
In either case it's the same version, which runs OK like this across two dev machines (Ubuntu), but fails to restart after syncing the data onto two Red Hat machines, one with 4K block sizes, the other with 8K blocks.
In each case, if you ls -la, the file sizes are the same on both machines; it's only when you run df etc. that you notice the block size discrepancy.
So far it's the only thing I can think of.
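To make the comparison above concrete: ls reports logical file sizes, which can match exactly across machines whose filesystems use different block sizes. A sketch of how to surface the block size directly (using the current directory as a stand-in for the journal directory):

```shell
# ls -la shows logical sizes, so identical journals look the same on both
# machines. stat -f prints the filesystem block size directly:
bs=$(stat -f -c '%s' .)
echo "filesystem block size: $bs bytes"
# df reports usage in these blocks, which is why the discrepancy was
# visible in df output but not in ls output:
df -k .
```

On the 4K and 8K machines, `bs` would come back as 4096 and 8192 respectively, even while `ls -la` shows byte-identical files.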
Do you think the OS's underlying block sizing could cause the issue, even though the files look to be the same size?
Or could there be an issue if one instance has libAIO installed and one doesn't? [I know one machine has it installed.]
Version is: HornetQ Server version 2.1.0.Final (marimbondo, 118)
Can you try starting your system with trunk? It would be safe at the moment.
We will push a release in the next few days.
I think we have determined the issue.
In dev I was running on two file systems, both with 4K block size allocations: libAIO on live, NIO on backup.
In UAT, two filesystems: live with 8K block sizes and backup with 4K block sizes (awaiting verification from the sysadmins).
In dev, when live fails and the system is restored from backup, all works fine.
In UAT it fails.
We have ensured that both run on libAIO in UAT, and it seems to work.
It's interesting that it's not just a case of NIO vs. libAIO, as that was already checked in dev (accidentally).
I will advise as more information is discovered, but it's certainly been a bit of a heart-stopping moment.
With libaio we always align by 512 while writing... I'm not aware of any case where libaio would need to align by 4K or 8K. If that were required, you would actually get errors and messages, since we always align by 512.
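As an illustration of that 512-byte alignment (the write size here is a made-up example, not anything from HornetQ itself), rounding a write up to the next 512-byte boundary is just integer arithmetic:

```shell
# Illustrative arithmetic only: round a 1300-byte write up to the next
# 512-byte boundary, as a 512-aligned journal writer would.
align=512
size=1300
aligned=$(( (size + align - 1) / align * align ))
echo "$aligned bytes"   # 1536, i.e. three 512-byte sectors
```

Since 512 divides both 4096 and 8192 evenly, a 512-aligned write is also aligned on a 4K or 8K filesystem, which is consistent with the point that the block size difference alone should not produce alignment failures.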
Also, trunk should be able to open a journal created by libaio and vice versa.
I will fix it if that's not the case. Can you try trunk and see how it goes, please?