4 Replies Latest reply on May 13, 2017 8:33 AM by mayerw01

    PostgreSQL log file errors

    ajitkumar2807

      My application is deployed on a remote WildFly 9.0 application server (Linux), and from there it connects to a PostgreSQL 9.4 database server on another remote Linux host. I send a long message to my application server through JMS, and processing this message takes many hours. Unfortunately I am running into performance issues on the DB server. In postgresql.log I can see the errors/warnings below:

       

      < 2017-05-05 09:18:00.676 CEST >LOG: could not receive data from client: Connection timed out

      < 2017-05-05 13:38:33.704 CEST >LOG: incomplete startup packet

      < 2017-05-05 13:42:29.158 CEST >LOG: unexpected EOF on client connection with an open transaction

      < 2017-05-05 13:50:49.163 CEST >LOG: checkpoints are occurring too frequently (1 second apart)
      < 2017-05-05 13:50:49.163 CEST >HINT: Consider increasing the configuration parameter "checkpoint_segments".

       

      After this timeout on the PostgreSQL side, my app server returns the JMS reply to the client, but processing of my message still continues on the application server and eventually ends in an OutOfMemoryError. Do I need to change something in postgresql.conf on the database server? Can somebody please advise what I should do to avoid these errors at the database end? I also tried updating the parameters below on the DB server side in order to keep the connection between my app server and the DB server alive, but after around 2 hours the DB server still logs the timeout warnings above and performance suffers.

       

      net.ipv4.tcp_keepalive_time = 60

      net.ipv4.tcp_keepalive_intvl = 10

      net.ipv4.tcp_keepalive_probes = 6
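      Besides the kernel-level sysctls above, PostgreSQL can send its own TCP keepalives on its sockets. A sketch of the equivalent server-side settings in postgresql.conf (the values below mirror the sysctls for illustration, they are not recommendations):

```ini
# postgresql.conf -- server-side TCP keepalive settings
# (these override the kernel defaults for PostgreSQL's own sockets;
#  values are illustrative, matching the sysctls above)
tcp_keepalives_idle = 60       # seconds of idle time before the first probe
tcp_keepalives_interval = 10   # seconds between unanswered probes
tcp_keepalives_count = 6       # lost probes before the connection is dropped
```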

        • 1. Re: PostgreSQL log file errors
          jbertram

          For the PostgreSQL issue I think that's probably better handled by the PostgreSQL community.

           

          However, I do have some thoughts on the JMS side of things. If your consumer is operating in the context of a JTA transaction, you could very well be following an anti-pattern where the long database transaction causes the consumer's JTA transaction to time out, which triggers a transaction rollback and, in most cases, a redelivery of the message, which in turn starts another long database transaction, ad infinitum. This could very well overwhelm your database and ultimately cause an OOME on the application server.

          • 2. Re: PostgreSQL log file errors
            ajitkumar2807

            Thanks for your reply, Justin. Yes, we are using JTA transactions. However, we have set the transaction timeout to -1 (infinite) in standalone-full-ha.xml. Could you please advise what can be done from the application server's perspective to keep the connection between the application server and the database server always alive and connected? Thanks in advance.
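            For reference, the setting in question is the coordinator-environment timeout in the transactions subsystem of standalone-full-ha.xml. A sketch of the relevant fragment (the subsystem namespace version and surrounding elements may differ in your WildFly installation):

```xml
<!-- standalone-full-ha.xml, transactions subsystem
     (namespace version may differ between WildFly releases) -->
<subsystem xmlns="urn:jboss:domain:transactions:3.0">
    <!-- default-timeout is in seconds; -1 disables the timeout entirely -->
    <coordinator-environment default-timeout="-1"/>
</subsystem>
```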

            • 3. Re: PostgreSQL log file errors
              jbertram

              However, we have set the transaction timeout to -1 (infinite) in standalone-full-ha.xml.

              In general, an infinite transaction timeout is a bad idea. In fact, long-running transactions are an anti-pattern. You really want your transactions to be as short as possible so they aren't consuming resources which other operations need, and also so that the resource managers have to track as little as possible. To be clear, the resource managers involved in your transaction (e.g. the JMS provider, the RDBMS, etc.) have to keep track of all the individual bits of work done in the transaction so that they can undo that work in case of a rollback. Typically, the longer the transaction runs, the more the resource manager has to track (often in memory), which puts more strain on the resource manager.
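              One common way out of this anti-pattern is to split the message's work into smaller units, each committed in its own short transaction (in an EJB this is typically a method marked @TransactionAttribute(REQUIRES_NEW)). A minimal plain-Java sketch of that batching shape, with the per-batch transactional work stubbed out (the class and method names here are hypothetical):

```java
import java.util.List;

// Hypothetical sketch: process a large job as many small batches,
// each committed independently, instead of one multi-hour transaction.
public class BatchProcessor {

    static final int BATCH_SIZE = 100;

    // Stand-in for one short transactional unit of work; in a real EJB
    // this method would carry @TransactionAttribute(REQUIRES_NEW) so each
    // batch commits (or rolls back) on its own.
    static int processBatch(List<Integer> batch) {
        int processed = 0;
        for (Integer item : batch) {
            processed++; // real work: the inserts/updates for this item
        }
        return processed;
    }

    // Drives the whole job as a sequence of short transactions.
    public static int processAll(List<Integer> items) {
        int total = 0;
        for (int start = 0; start < items.size(); start += BATCH_SIZE) {
            int end = Math.min(start + BATCH_SIZE, items.size());
            total += processBatch(items.subList(start, end));
        }
        return total;
    }
}
```

              This keeps each resource manager's undo log small, and a timeout or failure then costs you at most one batch rather than hours of work.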

               

              Could you please advise what could be done from application server perspective to keep the connection between application server and database server always alive and always connected.

              I can't advise on that point.

              • 4. Re: PostgreSQL log file errors
                mayerw01

                The checkpoint warning indicates that there are a lot of updates or inserts.

                As stated in the hint, the suggested approach is to increase the "checkpoint_segments" parameter. This parameter can only be set in the postgresql.conf file or on the server command line.

                You may find more information in the documentation: "Bulk operations such as large COPY transfers might cause a number of such warnings to appear if you have not set checkpoint_segments high enough."
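                A sketch of the relevant postgresql.conf entries for 9.4 (values are illustrative, and note that checkpoint_segments was replaced by max_wal_size in PostgreSQL 9.5):

```ini
# postgresql.conf (PostgreSQL 9.4; values are illustrative)
checkpoint_segments = 32            # default is 3; raise so heavy write bursts
                                    # don't force a checkpoint every second
checkpoint_completion_target = 0.9  # spread checkpoint I/O over the interval
checkpoint_warning = 30s            # log when checkpoints come closer than this
```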