1 Reply. Latest reply on Mar 8, 2011 10:01 PM by Clebert Suconic. Branched from an earlier discussion.

    Compacting errors when using sync=false

    lud (Newbie)

      Hello,

      We encounter the same error in both the 2.1.2 and the 2_2_EAP_GA tags of HornetQ (standalone, accessed via JMS through HA-JNDI).

      After about 15 minutes of load testing, the journal compacting algorithm starts throwing the exception below, and the journal grows indefinitely.

       

      Below are the log and our current configuration (based on HornetQ_2_2_EAP_GA).

       

      Any hints? We are sending ObjectMessages with an average size of 3-5 KB at a high rate (100 messages per second). Could the message size be a cause of the problem?
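      One way to sanity-check the quoted payload size, without a broker, is to serialize a representative object the same way ObjectMessage does (plain Java serialization) and measure the byte count. The payload class below is purely illustrative, sized to land in the 3-5 KB range:

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

public class PayloadSize {

    // Hypothetical payload, standing in for whatever we put into each ObjectMessage.
    static Serializable samplePayload() {
        HashMap<String, String> m = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            // roughly 40 bytes of key + value per entry
            m.put("key-" + i, "value-".repeat(5) + i);
        }
        return m;
    }

    // Serialize with plain Java serialization, which is what ObjectMessage uses,
    // and return the resulting byte count.
    static int serializedSize(Serializable s) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);
        }
        return bos.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serializedSize(samplePayload()) + " bytes");
    }
}
```

      Note that the serialized size, not the in-memory size, is what ends up in the journal per message.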

       

      Thanks in advance,

      L.

       

      • Log file

       

      [main] 17:15:48,788 INFO [org.hornetq.core.server.impl.HornetQServerImpl]  HornetQ Server version 2.2.0.GA (Bzzzzz, 121) [08b9d9ef-44e2-11e0-9e72-e7e343026ef2] started

      [JournalImpl::CompactorExecutor] 17:32:12,562 WARNING [org.hornetq.core.journal.impl.JournalImpl]  Inconsistency during compacting: Delete record being read on an existent record (id=4295380528)

      java.lang.IllegalStateException: Inconsistency during compacting: Delete record being read on an existent record (id=4295380528)

                at org.hornetq.core.journal.impl.JournalCompactor.onReadDeleteRecord(JournalCompactor.java:369)

                at org.hornetq.core.journal.impl.JournalImpl.readJournalFile(JournalImpl.java:679)

                at org.hornetq.core.journal.impl.JournalImpl.compact(JournalImpl.java:1641)

                at org.hornetq.core.journal.impl.JournalImpl$4.run(JournalImpl.java:2258)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:619)

      [JournalImpl::CompactorExecutor] 17:32:12,565 WARNING [org.hornetq.core.journal.impl.JournalImpl]  Error on reading compacting for JournalFileImpl: (hornetq-data-102.hq id = 102, recordID = 102)

      [JournalImpl::CompactorExecutor] 17:32:13,139 SEVERE [org.hornetq.core.journal.impl.JournalImpl]  Error on reading compacting for JournalFileImpl: (hornetq-data-102.hq id = 102, recordID = 102)

      java.lang.Exception: Error on reading compacting for JournalFileImpl: (hornetq-data-102.hq id = 102, recordID = 102)

                at org.hornetq.core.journal.impl.JournalImpl.compact(JournalImpl.java:1646)

                at org.hornetq.core.journal.impl.JournalImpl$4.run(JournalImpl.java:2258)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:619)

      Caused by: java.lang.Exception: Inconsistency during compacting: Delete record being read on an existent record (id=4295380528)

                at org.hornetq.core.journal.impl.JournalImpl.readJournalFile(JournalImpl.java:764)

                at org.hornetq.core.journal.impl.JournalImpl.compact(JournalImpl.java:1641)

                ... 4 more

      Caused by: java.lang.IllegalStateException: Inconsistency during compacting: Delete record being read on an existent record (id=4295380528)

                at org.hornetq.core.journal.impl.JournalCompactor.onReadDeleteRecord(JournalCompactor.java:369)

                at org.hornetq.core.journal.impl.JournalImpl.readJournalFile(JournalImpl.java:679)

                ... 5 more

      [JournalImpl::CompactorExecutor] 17:35:38,518 WARNING [org.hornetq.core.journal.impl.JournalImpl]  Inconsistency during compacting: Delete record being read on an existent record (id=4295567123)

      java.lang.IllegalStateException: Inconsistency during compacting: Delete record being read on an existent record (id=4295567123)

                at org.hornetq.core.journal.impl.JournalCompactor.onReadDeleteRecord(JournalCompactor.java:369)

                at org.hornetq.core.journal.impl.JournalImpl.readJournalFile(JournalImpl.java:679)

                at org.hornetq.core.journal.impl.JournalImpl.compact(JournalImpl.java:1641)

                at org.hornetq.core.journal.impl.JournalImpl$4.run(JournalImpl.java:2258)

                at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)

                at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)

                at java.lang.Thread.run(Thread.java:619)

       

      • hornetq-configuration.xml

       

      <configuration xmlns="urn:hornetq"

                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                     xsi:schemaLocation="urn:hornetq /schema/hornetq-configuration.xsd">

       

       

                <clustered>true</clustered>

                <cluster-user>myClusterAdmin</cluster-user>

                <cluster-password>myClusterPwd</cluster-password>

       

       

                <management-address>jms.queue.hornetq.management</management-address>

                <message-counter-enabled>true</message-counter-enabled>

                <persistence-enabled>true</persistence-enabled>

       

       

                <scheduled-thread-pool-max-size>128</scheduled-thread-pool-max-size>

                <thread-pool-max-size>256</thread-pool-max-size>

                <journal-sync-non-transactional>false</journal-sync-non-transactional>

                <journal-sync-transactional>false</journal-sync-transactional>

       

       

                <paging-directory>${data.dir:../../HQ_active1/data}/paging</paging-directory>

                <bindings-directory>${data.dir:../../HQ_active1/data}/bindings</bindings-directory>

                <journal-directory>${data.dir:../../HQ_active1/data}/journal</journal-directory>

                <large-messages-directory>${data.dir:../../HQ_active1/data}/large-messages</large-messages-directory>

       

       

                <journal-min-files>10</journal-min-files>

                <journal-compact-min-files>50</journal-compact-min-files>

                <!--journal-compact-percentage>50</journal-compact-percentage-->

                <journal-type>NIO</journal-type>

                <journal-max-io>1</journal-max-io>

                <journal-buffer-timeout>333333333</journal-buffer-timeout>

       

       

   <shared-store>true</shared-store>
   <!-- we kill the server process, so this does not need to be set to true -->
   <failover-on-shutdown>false</failover-on-shutdown>

         <allow-failback>true</allow-failback>

         <security-enabled>false</security-enabled>

       

       

       

         <connectors>

            <connector name="netty-connector">

               <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

               <param key="host"  value="85.205.38.76"/>

               <param key="port"  value="13285"/>

            </connector>     

         </connectors>

       

       

         <acceptors>

            <acceptor name="netty-acceptor">

               <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>

               <param key="host"  value="85.205.38.76"/>

               <param key="port"  value="13285"/>

            </acceptor>

          </acceptors>

       

       

      <broadcast-groups>

            <broadcast-group name="bg-group1">

                <!--local-bind-address></local-bind-address>

               <local-bind-port>13287</local-bind-port-->

               <group-address>239.0.5.42</group-address>

               <group-port>9876</group-port>

               <broadcast-period>5000</broadcast-period>

               <connector-ref>netty-connector</connector-ref>

            </broadcast-group>

         </broadcast-groups>

       

       

         <discovery-groups>

            <discovery-group name="dg-group1">

               <group-address>239.0.5.42</group-address>

               <group-port>9876</group-port>

               <refresh-timeout>10000</refresh-timeout>

            </discovery-group>

         </discovery-groups>

       

                <cluster-connections>

                          <cluster-connection name="my-cluster">

                                    <address>jms</address>

                                    <connector-ref>netty-connector</connector-ref>

                                    <discovery-group-ref discovery-group-name="dg-group1"/>

                          </cluster-connection>

                </cluster-connections>

       

       

         <security-settings>

            <security-setting match="#">

               <permission type="createNonDurableQueue" roles="guest"/>

               <permission type="deleteNonDurableQueue" roles="guest"/>

               <permission type="consume" roles="guest"/>

               <permission type="send" roles="guest"/>

                           <permission type="manage" roles="guest" />

            </security-setting>

         </security-settings>

       

       

         <address-settings>

            <!--default for catch all-->

            <address-setting match="#">

               <dead-letter-address>jms.queue.DLQ</dead-letter-address>

               <expiry-address>jms.queue.ExpiryQueue</expiry-address>

               <redelivery-delay>0</redelivery-delay>

               <max-size-bytes>1073741824</max-size-bytes>      

               <message-counter-history-day-limit>10</message-counter-history-day-limit>

               <address-full-policy>BLOCK</address-full-policy>

            </address-setting>

         </address-settings>

       

       

      </configuration>

       

      • hornetq-jms.xml

      <configuration xmlns="urn:hornetq"

                       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

                       xsi:schemaLocation="urn:hornetq /schema/hornetq-jms.xsd">

       

                <connection-factory name="NettyConnectionFactory">

                          <call-timeout>600000</call-timeout>

                          <!--connection-ttl>-1</connection-ttl-->

                          <block-on-non-durable-send>true</block-on-non-durable-send>

                          <discovery-group-ref discovery-group-name="dg-group1"/>

                           <failover-on-initial-connection>true</failover-on-initial-connection>

                          <connectors>

                                    <connector-ref connector-name="netty-connector"/>

                          </connectors>

       

                          <entries>

                                    <entry name="/ConnectionFactory"/>

                                    <entry name="/XAConnectionFactory"/>

                          </entries>

       

       

                          <confirmation-window-size>1048576</confirmation-window-size>

       

                          <ha>true</ha>

       

       

                          <!-- Pause 1 second between connect attempts -->

                          <retry-interval>1000</retry-interval>

       

       

                          <!-- Multiply subsequent reconnect pauses by this multiplier. This can be used to

                          implement an exponential back-off. For our purposes we just set to 1.0 so each reconnect

                          pause is the same length -->

                          <retry-interval-multiplier>1.0</retry-interval-multiplier>

       

       

                          <!-- Try reconnecting an unlimited number of times (-1 means "unlimited") -->

                          <reconnect-attempts>-1</reconnect-attempts>

       

                </connection-factory>

       

       

                <queue name="DLQ">

                          <entry name="/queue/DLQ"/>

                          <durable>true</durable>

                </queue>

                <queue name="ExpiryQueue">

                          <entry name="/queue/ExpiryQueue"/>

                          <durable>true</durable>

                </queue>  

       

       

                <queue name="ProcessingRequestQueue_IT">

                      <entry name="/queue/ProcessingRequestQueue_IT"/>

                          <durable>true</durable>

                </queue>

                <queue name="DeliveryRequestQueue_IT">

                      <entry name="/queue/DeliveryRequestQueue_IT"/>

                          <durable>true</durable>

                </queue>

                <queue name="DeliveryRetryQueue_IT">

                      <entry name="/queue/DeliveryRetryQueue_IT"/>

                          <durable>true</durable>

                </queue>

                <queue name="TransactionUpdateQueue_IT">

                      <entry name="/queue/TransactionUpdateQueue_IT"/>

                          <durable>true</durable>

                </queue>

                <queue name="TransactionSelectQueue_IT">

                      <entry name="/queue/TransactionSelectQueue_IT"/>

                          <durable>true</durable>

                </queue>

                <queue name="TransactionSelectResultQueue_IT">

                      <entry name="/queue/TransactionSelectResultQueue_IT"/>

                          <durable>true</durable>

                </queue>

                <queue name="SubscriptionUpdateQueue_IT">

                          <entry name="/queue/SubscriptionUpdateQueue_IT" />

                          <durable>true</durable>

                </queue>

                <!-- Global Topics -->

                <topic name="OpCoUpdatesTopic">

                          <entry name="/topic/OpCoUpdatesTopic" />

                </topic>

       

                <topic name="WorkflowUpdatesTopic">

                          <entry name="/topic/WorkflowUpdatesTopic" />

                </topic>

       

                <topic name="ProcessingOpCoUpdatesTopic">

                          <entry name="/topic/ProcessingOpCoUpdatesTopic" />

                </topic>

       

                <topic name="CacheUpdateTopic">

                          <entry name="/topic/CacheUpdateTopic" />

                </topic>

       

       

              <queue name="TestQueue_IT">

                      <entry name="/queue/TestQueue_IT" />

                          <durable>true</durable>

              </queue>

       

       

       

      </configuration>


        • 1. Journaling errors
          Clebert Suconic (Master)

          Bingo: you have a different issue.

          Keep journal-sync = true for both transactional and non-transactional operations (those are the default values).

          What is happening is that the delete is jumping ahead of the tx.commit(), which causes the compactor to read the delete record before the commit has been done.

          I have changed the compactor routine so that it no longer throws an exception. In the meantime, you can simply keep journal-sync set to true.

          I will see what can be done to support journal-sync=false. (Maybe you could create a JIRA for the compactor issues when journal-sync=false.)

          For the moment, just keep it set to true.
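          Concretely, restoring the defaults means flipping the two journal-sync parameters in hornetq-configuration.xml (set to false in the configuration posted above) back to true:

```xml
<!-- Defaults: sync the journal to disk on both transactional and
     non-transactional operations before acknowledging. -->
<journal-sync-transactional>true</journal-sync-transactional>
<journal-sync-non-transactional>true</journal-sync-non-transactional>
```

          This costs some throughput relative to sync=false, but the compactor no longer sees delete records ahead of their commits, which is the inconsistency reported in the log.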