
    Changing filter on a running queue

    jgabrielygalan

      Hi,

       

      I have a standalone HornetQ server running with some queues bound to an address with a filter, with live traffic, so producers and consumers are working normally. All these queues are predefined in hornetq-configuration.xml and are durable. I would like to understand the operational procedure for modifying one of the queues' filter definitions. We've tried changing it in the configuration file and restarting, but it seems the server isn't picking up the new filter definition. We haven't found a way to do this using JMX. Via JMX we could drop the queue (first we would need to stop the consumers) and recreate it, but we would lose some messages in the process.

       

      So, what is the best practice to do this?

        • 1. Re: Changing filter on a running queue
          jbertram

          We've tried changing it in the configuration file and restarting, but it seems the server isn't picking up the new filter definition.

          Can you explain a bit more about the problem you had in this use-case?  What behavior did you expect and what behavior did you actually observe?

           

          We haven't found a way to do this using JMX. Via JMX we could drop the queue (first we would need to stop the consumers) and recreate it, but we would lose some messages in the process.

          Before you drop the queue you could create a new, temporary queue and move all the existing messages into that queue.  Then you could delete and re-create the original queue with the new filter and move all the messages back and finally delete the temporary queue you created.
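
          For reference, a rough sketch of that procedure using the core management API over JMX might look like the following. This is only illustrative: the JMX service URL, the queue and address names, and the filter string are placeholders, and (as noted in the comments) the steps are not atomic.

              import javax.management.JMX;
              import javax.management.MBeanServerConnection;
              import javax.management.remote.JMXConnector;
              import javax.management.remote.JMXConnectorFactory;
              import javax.management.remote.JMXServiceURL;

              import org.hornetq.api.core.SimpleString;
              import org.hornetq.api.core.management.HornetQServerControl;
              import org.hornetq.api.core.management.ObjectNameBuilder;
              import org.hornetq.api.core.management.QueueControl;

              public class ChangeQueueFilter {
                 public static void main(String[] args) throws Exception {
                    // Placeholder JMX URL; adjust host/port to your environment.
                    JMXConnector connector = JMXConnectorFactory.connect(
                          new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:3000/jmxrmi"));
                    MBeanServerConnection mbsc = connector.getMBeanServerConnection();

                    HornetQServerControl server = JMX.newMBeanProxy(mbsc,
                          ObjectNameBuilder.DEFAULT.getHornetQServerObjectName(),
                          HornetQServerControl.class);

                    // Placeholder names and filter string.
                    String address = "my-address";
                    String queue = "my-queue";
                    String tempQueue = "my-queue-tmp";
                    String newFilter = "type='something'";

                    // 1. Create a temporary durable queue on the same address (no filter).
                    server.createQueue(address, tempQueue, null, true);

                    // 2. Move all existing messages (null filter = everything) to the temporary queue.
                    QueueControl original = JMX.newMBeanProxy(mbsc,
                          ObjectNameBuilder.DEFAULT.getQueueObjectName(
                                SimpleString.toSimpleString(address),
                                SimpleString.toSimpleString(queue)),
                          QueueControl.class);
                    original.moveMessages(null, tempQueue);

                    // 3. Drop the original queue and re-create it with the new filter.
                    server.destroyQueue(queue);
                    server.createQueue(address, queue, newFilter, true);

                    // 4. Move the messages back and clean up. Note: moveMessages places
                    //    messages directly on the target queue, so the new filter is not
                    //    re-applied to them, and none of these steps are atomic.
                    QueueControl temp = JMX.newMBeanProxy(mbsc,
                          ObjectNameBuilder.DEFAULT.getQueueObjectName(
                                SimpleString.toSimpleString(address),
                                SimpleString.toSimpleString(tempQueue)),
                          QueueControl.class);
                    temp.moveMessages(null, queue);
                    server.destroyQueue(tempQueue);

                    connector.close();
                 }
              }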

           

          However, if you find that your filter is changing frequently, you may want to configure the filter on the consumer rather than on the server.

          • 2. Re: Re: Changing filter on a running queue
            jgabrielygalan

            We've tried changing it in the configuration file and restarting, but it seems the server isn't picking up the new filter definition.

            Can you explain a bit more about the problem you had in this use-case?  What behavior did you expect and what behavior did you actually observe?

             

            We keep our configuration files (hornetq-configuration.xml, etc.) in Git, and we have a Jenkins job to deploy the HornetQ instance to production and staging. We have some consumers for these queues that belong to other departments in the company, so we discuss with them the queues, filters, etc. that they need, then change the configuration, store it in Git, and redeploy the environments.

             

            This has worked fine when adding queues (we serve several departments), diverts, etc. But we found the following problem when trying to change a filter. We have everything running: a producer sending to an address, a queue bound to that address with a filter, and some consumers consuming from that queue. We modify hornetq-configuration.xml, push it to Git, then redeploy the standalone HornetQ server with the new configuration file and restart. The result is that the queue is still bound to the address with the old filter. It didn't pick up the new filter definition present in the hornetq-configuration.xml file.


            We haven't found a way to do this using JMX. Via JMX we could drop the queue (first we would need to stop the consumers) and recreate it, but we would lose some messages in the process.

            Before you drop the queue you could create a new, temporary queue and move all the existing messages into that queue.  Then you could delete and re-create the original queue with the new filter and move all the messages back and finally delete the temporary queue you created.


            If I understand correctly, I can create a new queue bound to the same address with the filter, so I don't lose any messages. Now the messages are getting to both queues, and I copy the messages from the old queue into this one. First question: what happens in this case to duplicated messages? During this process, some messages will be present in both queues. Assuming everything is OK and messages are not duplicated, I drop the queue and recreate it with the new filter. Then I can move the messages back from the temporary queue, which might also contain duplicates by now, since the producers are still producing.


            However, if you find that your filter is changing frequently, you may want to configure the filter on the consumer rather than on the server.

             

            I'll take a look into this, but I'd rather control the filters myself (consumers are the responsibility of other departments).

             

            Thanks.

            • 3. Re: Re: Changing filter on a running queue
              jbertram

              We modify hornetq-configuration.xml, push it to Git, then redeploy the standalone HornetQ server with the new configuration file and restart. The result is that the queue is still bound to the address with the old filter. It didn't pick up the new filter definition present in the hornetq-configuration.xml file.

              What do you mean "the queue is still bound to the address with the old filter"?  What behavior did you expect and what behavior did you actually observe?  Are messages which should match the new filter not getting through while messages that match the old filter are getting through?


              If I understand correctly, I can create a new queue bound to the same address with the filter, so I don't lose any messages. Now the messages are getting to both queues, and I copy the messages from the old queue into this one. First question: what happens in this case to duplicated messages? During this process, some messages will be present in both queues. Assuming everything is OK and messages are not duplicated, I drop the queue and recreate it with the new filter. Then I can move the messages back from the temporary queue, which might also contain duplicates by now, since the producers are still producing.

              My previous suggestion was made under the assumption that you would be able to stop the producers during this process.  If you can't stop the producers then you're likely to either lose messages or get duplicates at some point along the way since there's no way to batch management operations into an atomic transaction so that they all happen at exactly the same time.  At the end of the day, I don't fully understand your use-case so it's hard to provide an exhaustive solution.


              I'll take a look into this, but I'd rather control the filters myself (consumers are the responsibility of other departments).

              The fact that the consumers are the responsibility of the other departments is exactly why they might need to control the filters themselves. Previously you said, "We have some consumers for these queues that belong to other departments in the company, so we discuss with them the queues, filters, etc. that they need, then change the configuration, store it in Git, and redeploy the environments." Why not take the filters out of the equation and let the other departments control them, since they are controlling the consumers anyway? Don't they know what messages they want (i.e. what filter they need to use)? Why does this necessarily need to be enforced on the server? Instead of rebuilding and redeploying the environment when a new filter is needed (which interrupts every client), why can't the consumer simply use a different filter?

              • 4. Re: Re: Re: Changing filter on a running queue
                jgabrielygalan

                What do you mean "the queue is still bound to the address with the old filter"?  What behavior did you expect and what behavior did you actually observe?  Are messages which should match the new filter not getting through while messages that match the old filter are getting through?

                 

                With our system up and running, I connect with JMX and check the queue's Filter attribute, and it shows the value I set in the configuration file (hornetq-configuration.xml):

                 

                    <queue name="cat-tracking">
                      <address>billing-platform-notifications</address>
                      <durable>true</durable>
                      <filter string="(type='subscription') OR (type='unsubscription')"/>
                    </queue>

                 

                So, in JMX, the Filter attribute is: (type='subscription') OR (type='unsubscription')

                Then I change the configuration file to this:

                 

                    <queue name="cat-tracking">
                      <address>billing-platform-notifications</address>
                      <durable>true</durable>
                      <filter string="(type='subscription') OR (type='unsubscription') OR (type='renew')"/>
                    </queue>

                 

                I restart HornetQ, connect again with JMX to check the queue's Filter attribute, and it's still the old one: (type='subscription') OR (type='unsubscription'). It hasn't been updated to the new value.

                 

                My previous suggestion was made under the assumption that you would be able to stop the producers during this process.  If you can't stop the producers then you're likely to either lose messages or get duplicates at some point along the way since there's no way to batch management operations into an atomic transaction so that they all happen at exactly the same time.  At the end of the day, I don't fully understand your use-case so it's hard to provide an exhaustive solution.

                 

                We have some (100+) components processing billing operations. Each of them produces messages (notifications) for other components in my company (several million operations per day). For example, there's a department in charge of notifying external parties of some of the billing operations, other departments track events for stats (business intelligence), etc. There are several of them. My use case is to provide a reliable and durable system so that all those consumers receive each and every notification of an operation performed by my platform that they are interested in. Each consumer is interested in a subset of all notifications, be it by type or by other properties of the message (country, etc.). I can't stop the producers. Our system was not designed with this in mind, and right now it would be an operational nightmare. We have a check so that if HornetQ is down we spool notifications to disk, to be recovered later when it's up, but we don't have a way to tell 100+ processes on many different machines to go into this mode on command for maintenance. So right now it's not possible; maybe we should think about it, if that's the better way.

                 

                Regarding atomicity, I think we need several operations because there's no single operation to update the binding definition of a queue. The only operation I need is to update a predefined queue's filter.

                 

                The fact that the consumers are the responsibility of the other departments is exactly why they might need to control the filters themselves.

                Why not take the filters out of the equation and let the other departments control them, since they are controlling the consumers anyway? Don't they know what messages they want (i.e. what filter they need to use)? Why does this necessarily need to be enforced on the server?

                 

                I think I might be missing some concept or way to look at it. My line of reasoning was the following: I need to have some predefined queues, because I must ensure that no message is lost, whether the consumers are connected or not. So I configure the queues in the hornetq-configuration.xml file. Now, every consumer needs a subset, so I have to put a filter on each queue so that they only get what they want. I ask for their use cases, define a filter, and set it in the config file. Now I have a single address I publish to, and n durable queues, one for each consumer's use case.

                 

                I have some questions that might help you understand my conceptual gaps and show me the light.

                - If the consumers are in charge of defining the filter when they connect, does this mean that I have to define a queue with no filter?

                - If this is the case, won't this queue fill up with unwanted messages because those will never be consumed?

                 

                In general I think I have some preconceptions about a "static" configuration of the server, as opposed to more dynamic, runtime-defined behaviour of the system, so maybe I need some enlightenment, especially about how to ensure that we don't lose messages (queues are durable, predefined, etc.).

                 

                Thanks.

                • 5. Re: Re: Re: Changing filter on a running queue
                  jbertram

                  I restart HornetQ, connect again with JMX to check the queue's Filter attribute, and it's still the old one: (type='subscription') OR (type='unsubscription'). It hasn't been updated to the new value.

                  Finally, a simple and functional description of the problem.  This is something I can actually work with.  Thanks.

                   

                  BTW, what version of HornetQ are you using?


                  Regarding atomicity, I think we need several operations because there's no single operation to update the binding definition of a queue. The only operation I need is to update a predefined queue's filter.

                  Yes, that's clearly the case.  However, I'm trying to give you options to work with what you have rather than just saying we'll add that in the future and making you wait for a new feature to be implemented and released.


                  My line of reasoning was the following: I need to have some predefined queues, because I must ensure that no message is lost, whether the consumers are connected or not. So I configure the queues in the hornetq-configuration.xml file. Now, every consumer needs a subset, so I have to put a filter on each queue so that they only get what they want. I ask for their use cases, define a filter, and set it in the config file. Now I have a single address I publish to, and n durable queues, one for each consumer's use case.

                  Why not define a single queue in hornetq-configuration.xml (so that no messages are lost) and then just let the consumers decide what messages they want (i.e. define their own filter programmatically on the client-side)?  That would be much simpler for you and more flexible for everyone.  It would cut out the whole process of getting use-cases from the consumers, tailoring the queue filters on the server-side, and then changing the filters on the server-side when necessary.
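
                  To illustrate, a client-side filter is just a standard JMS message selector supplied when the consumer is created. Here's a minimal sketch (the Netty connector setup is only an example, and the queue name and selector are borrowed from earlier in this thread):

                      import javax.jms.Connection;
                      import javax.jms.ConnectionFactory;
                      import javax.jms.MessageConsumer;
                      import javax.jms.Queue;
                      import javax.jms.Session;

                      import org.hornetq.api.core.TransportConfiguration;
                      import org.hornetq.api.jms.HornetQJMSClient;
                      import org.hornetq.api.jms.JMSFactoryType;
                      import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

                      public class FilteredConsumer {
                         public static void main(String[] args) throws Exception {
                            ConnectionFactory cf = HornetQJMSClient.createConnectionFactoryWithoutHA(
                                  JMSFactoryType.CF,
                                  new TransportConfiguration(NettyConnectorFactory.class.getName()));
                            Queue queue = HornetQJMSClient.createQueue("cat-tracking");

                            Connection connection = cf.createConnection();
                            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                            // The selector lives with the consumer, so changing it only means
                            // restarting this client, not reconfiguring or restarting the broker.
                            MessageConsumer consumer = session.createConsumer(queue,
                                  "(type='subscription') OR (type='unsubscription') OR (type='renew')");
                            connection.start();

                            System.out.println(consumer.receive(5000));
                            connection.close();
                         }
                      }

                  One thing to keep in mind: messages that don't match any consumer's selector simply stay in the queue, which is where your next question comes in.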


                  If the consumers are in charge of defining the filter when they connect, does this mean that I have to define a queue with no filter?

                  You don't have to define a queue with no filter. The consumers can define their own queue if they want, but that would mean they would miss messages sent before they connected. If you want to capture all messages sent to the server before the client connects then you'd have to define a queue in the configuration.
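
                  Just to sketch what "define their own queue" could look like with the core API (the queue name and filter below are hypothetical, and the queue only captures messages routed after it is created):

                      import org.hornetq.api.core.TransportConfiguration;
                      import org.hornetq.api.core.client.ClientSession;
                      import org.hornetq.api.core.client.ClientSessionFactory;
                      import org.hornetq.api.core.client.HornetQClient;
                      import org.hornetq.api.core.client.ServerLocator;
                      import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

                      public class CreateOwnQueue {
                         public static void main(String[] args) throws Exception {
                            ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                                  new TransportConfiguration(NettyConnectorFactory.class.getName()));
                            ClientSessionFactory sf = locator.createSessionFactory();
                            ClientSession session = sf.createSession();

                            // A durable queue bound to the shared address with this consumer's
                            // own filter; the queue name and filter are hypothetical examples.
                            session.createQueue("billing-platform-notifications", "bi-tracking",
                                  "type='renew'", true);

                            session.close();
                            sf.close();
                            locator.close();
                         }
                      }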


                  If this is the case, won't this queue fill up with unwanted messages because those will never be consumed?

                  I guess that depends on whether or not producers are sending unwanted messages that the clients will never consume. If so, then yes, those unwanted messages will accumulate in the queue. You can remove those messages administratively using the removeMessages operation with a filter, have the producer set a TTL on the messages so they expire after a certain time, or configure a default <expiry-delay> on the server so that all messages without their own TTL expire after a certain amount of time.
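
                  For example, the <expiry-delay> approach would be an address-settings entry in hornetq-configuration.xml along these lines (the match and the 24-hour value are only illustrative):

                      <address-settings>
                        <address-setting match="billing-platform-notifications">
                          <!-- messages with no TTL of their own expire after 24 hours (in ms) -->
                          <expiry-delay>86400000</expiry-delay>
                          <!-- optional: route expired messages somewhere instead of dropping them -->
                          <expiry-address>jms.queue.ExpiryQueue</expiry-address>
                        </address-setting>
                      </address-settings>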

                  • 6. Re: Re: Re: Re: Changing filter on a running queue
                    jgabrielygalan

                    Finally, a simple and functional description of the problem.  This is something I can actually work with.  Thanks.

                     

                    Sorry for not being clearer before.

                     

                    BTW, what version of HornetQ are you using?

                    2.4.1.Final (Fast Hornet, 124)

                     

                    However, I'm trying to give you options to work with what you have rather than just saying we'll add that in the future and making you wait for a new feature to be implemented and released.

                    Sure, it's greatly appreciated.

                     

                    Why not define a single queue in hornetq-configuration.xml (so that no messages are lost) and then just let the consumers decide what messages they want (i.e. define their own filter programmatically on the client-side)?  That would be much simpler for you and more flexible for everyone.  It would cut out the whole process of getting use-cases from the consumers, tailoring the queue filters on the server-side, and then changing the filters on the server-side when necessary.

                     

                    I have to think about this, but it sounds like it might be the best way to go. I would need one queue for each consumer, because each of them needs their own copy of the messages, but I see the point.

                     

                    If you want to capture all messages sent to the server before the client connects then you'd have to define a queue in the configuration.

                     

                    Yes, that's actually the case, so that's why all our ideas started with predefined durable queues.

                     

                    I guess that depends on whether or not producers are sending unwanted messages that the clients will never consume. If so, then yes, those unwanted messages will accumulate in the queue. You can remove those messages administratively using the removeMessages operation with a filter, have the producer set a TTL on the messages so they expire after a certain time, or configure a default <expiry-delay> on the server so that all messages without their own TTL expire after a certain amount of time.

                     

                    Yes, if I create a queue for each consumer (department) without a filter, there will certainly be some unwanted messages in each queue. I'd need to run some tests and evaluate the number of unwanted messages so I can tune this TTL to a good balance between coping with slow or stopped consumers and the disk space the messages will consume.

                     

                    Anyway, thanks a lot for your support and your patience. I see now that I wasn't too far off in my ideas, but you showed me some alternatives that are worth exploring. Please let me know if you find anything on the first point about restarting with a new version of the config file (shall I mark this question as answered even though this point is still pending?), and also on the future possibility of hot-updating a filter on a queue while producers and consumers are running.

                     

                    Thanks, Justin.

                    • 7. Re: Re: Re: Re: Changing filter on a running queue
                      jbertram

                      I believe I see in the code where the queue binding information loaded from disk (from the bindings journal) is used instead of the configuration from XML when there is a naming collision (i.e. the names of the queues are the same). That's almost certainly why your updated XML is being disregarded in favor of the old configuration loaded from disk.

                       

                      I'll need to discuss this with the other developers to see what (if anything) should be done about it.