-
1. Re: postgreSql log file Errors
jbertram May 11, 2017 10:02 AM (in response to ajitkumar2807)
For the PostgreSQL issue I think that's probably better handled by the PostgreSQL community.
However, I do have some thoughts on the JMS side of things. If your consumer is operating in the context of a JTA transaction, you could very well be following an anti-pattern: the long database transaction causes the consumer's JTA transaction to time out, which triggers a rollback and, in most cases, a redelivery of the message, which then starts another long database transaction, ad infinitum. This could easily overwhelm your database and ultimately cause an OOME on the application server.
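If that redelivery loop is the concern, one common mitigation (an illustrative sketch, not something from this thread) is to cap redelivery attempts in the messaging subsystem of standalone-full-ha.xml so a repeatedly failing message ends up on a dead-letter queue instead of looping forever. The exact subsystem and namespace depend on your version (HornetQ in EAP 6, ActiveMQ Artemis in later WildFly/EAP); the attribute names below exist in both, but verify against your schema:

```xml
<!-- Inside the messaging (HornetQ) or messaging-activemq (Artemis) subsystem
     of standalone-full-ha.xml. Values here are arbitrary examples. -->
<address-setting name="#"
                 dead-letter-address="jms.queue.DLQ"
                 max-delivery-attempts="5"
                 redelivery-delay="5000"/>
```

With a setting like this, after five failed deliveries the broker routes the message to the DLQ rather than starting another long-running database transaction.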
-
2. Re: postgreSql log file Errors
ajitkumar2807 May 11, 2017 11:46 PM (in response to ajitkumar2807)
Thanks for your reply, Justin. Yes, we are using a JTA transaction. However, we have defined the transaction timeout as -1 (i.e. infinite) in the standalone-full-ha.xml file. Could you please advise what could be done from the application server's perspective to keep the connection between the application server and the database server always alive and connected? Thanks in advance.
-
3. Re: postgreSql log file Errors
jbertram May 12, 2017 10:02 AM (in response to ajitkumar2807)
However we have defined the transaction time out as -1 in standalone-full-ha.xml file which is infinite.
In general an infinite transaction timeout is a bad idea. In fact, long-running transactions are an anti-pattern. You really want your transactions to be as short as possible so they aren't consuming resources which other operations need, and also so that the resource managers have to track as little as possible. To be clear, the resource managers involved in your transaction (e.g. the JMS provider, the RDBMS, etc.) have to keep track of all the individual bits of work done in the transaction so that they can undo that work in case of a rollback. Typically, the longer the transaction runs, the more the resource manager has to track (often in memory), which puts more strain on it.
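In practical terms, that means replacing the -1 with a finite default in the transactions subsystem of standalone-full-ha.xml. A minimal sketch, assuming a WildFly/EAP-style transactions subsystem (the 300-second value is an arbitrary example; pick something just above your longest legitimate unit of work):

```xml
<!-- transactions subsystem in standalone-full-ha.xml; the namespace version
     varies by release, so match it to your existing configuration. -->
<coordinator-environment default-timeout="300"/>
```

A finite timeout means a stuck transaction is rolled back and its resources released, instead of holding locks and tracked state indefinitely.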
Could you please advise what could be done from application server perspective to keep the connection between application server and database server always alive and always connected.
I can't advise on that point.
-
4. Re: postgreSql log file Errors
mayerw01 May 13, 2017 8:33 AM (in response to ajitkumar2807)
This message indicates that there are a lot of updates or inserts.
As stated in the message the suggested approach is to increase the parameter "checkpoint_segments". This parameter can only be set in the postgresql.conf file or on the server command line.
You may find more information in the documentation: "Bulk operations such as large COPY transfers might cause a number of such warnings to appear if you have not set checkpoint_segments high enough."
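For reference, a minimal postgresql.conf sketch along those lines (the values are illustrative, not tuned for any particular workload; note that checkpoint_segments applies to PostgreSQL 9.4 and earlier, while 9.5 and later replaced it with max_wal_size):

```ini
# postgresql.conf (illustrative values -- requires a server restart or reload)
checkpoint_segments = 32            # default is 3; raise to space out checkpoints
checkpoint_completion_target = 0.9  # spread checkpoint I/O over more of the interval
```

Raising these reduces how often checkpoints occur during heavy insert/update bursts, which is what the log warnings are complaining about, at the cost of more WAL disk usage and longer crash recovery.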