We have discussed this in the past. Client and server should be using the same time, with time zones set correctly.
We have an open JIRA to compensate for the server's time, but honestly I'm not sure how that could be done properly. We would be inventing a timing protocol when other solutions for that already exist.
We may come up with a solution, but the optimal approach is still to set client and server to the same GMT time and have the time zones configured correctly.
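To make the skew problem concrete, here is a minimal sketch (no broker involved; the method names are illustrative, not HornetQ API) modeling the JMS rule that the expiration stamp comes from the client's clock while the expiry check runs against the server's clock:

```java
public class TtlSkewDemo {
    // Per JMS, the expiration is stamped from the client's clock at send
    // time: JMSExpiration = send time + TTL (0 means "never expires").
    static long expiration(long clientSendMillis, long ttlMillis) {
        return ttlMillis == 0 ? 0 : clientSendMillis + ttlMillis;
    }

    // The server discards the message once its own clock passes the stamp.
    static boolean expired(long serverNowMillis, long expirationMillis) {
        return expirationMillis != 0 && serverNowMillis > expirationMillis;
    }

    public static void main(String[] args) {
        long clientNow = 1_000_000L;           // client clock
        long serverNow = clientNow + 600_000L; // server clock 10 minutes ahead
        long ttl = 300_000L;                   // 5-minute TTL

        long exp = expiration(clientNow, ttl);
        // With the skewed server clock, the message is already
        // "expired" the moment it arrives.
        System.out.println(expired(serverNow, exp)); // prints "true"
    }
}
```

If the two clocks agree on GMT, the stamp and the check use the same timeline and the TTL behaves as expected.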
I am sure you have thought about this more than I have, but what is the downside to only using the server's time?
That is to say... having the TTL begin from the moment the message is received by the server, not sent from the client.
Then the client's time would be moot.
The TTL, at its simplest, is just a relative time, or life span, measured from the moment the message is "available" for consumption on the server.
btw - One more comment about the proper setup of client and server time.
In my tests, the GMT time for the hosts was correct (the same).
HornetQ did not appear to take time zone and DST information into account.
Is there something special I have to do when creating the message to indicate the time zone?
The JMS spec actually states that the expiration should be calculated in the send method. However, even if it didn't, just because a message is sent doesn't mean it is available to a consumer. Think of paging, or of a message being redistributed around a cluster: it could be some time before the message is available, so again this would be non-deterministic.
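A small sketch of that last point (again no broker; the method names are illustrative): even with the expiration correctly stamped in send(), the time at which the message actually becomes available is variable, so the remaining lifetime at consumption would be non-deterministic if TTL were measured from availability instead:

```java
public class SendTimeTtlDemo {
    // send() stamps the expiration from the producer's clock (JMS rule).
    static long stampExpiration(long sendMillis, long ttlMillis) {
        return ttlMillis == 0 ? 0 : sendMillis + ttlMillis;
    }

    // Lifetime left once the message finally becomes available to a consumer.
    static long remainingLife(long availableMillis, long expirationMillis) {
        return expirationMillis == 0 ? Long.MAX_VALUE
                                     : expirationMillis - availableMillis;
    }

    public static void main(String[] args) {
        long sent = 0L;
        long ttl = 300_000L; // 5 minutes
        long exp = stampExpiration(sent, ttl);

        // Available immediately: the full 5 minutes remain.
        System.out.println(remainingLife(sent, exp));            // 300000
        // Paged out or redistributed for 4 minutes first: 1 minute remains.
        System.out.println(remainingLife(sent + 240_000L, exp)); // 60000
    }
}
```

Stamping at send keeps the expiration a single fixed instant, regardless of how long the broker holds the message before a consumer can see it.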