15 Replies. Latest reply on Sep 18, 2012 5:25 PM by ndipiazza

    HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts

    ndipiazza

      This question is related to: https://community.jboss.org/message/753626

       

      We have seen this problem in JBoss 7.1.1.Final (HornetQ 2.2.13) and verified that it also happens in 7.1.2.Final (HornetQ 2.2.16). For now, we continue testing on JBoss 7.1.1.Final.

       

      Summary of the issue

       

      The MDB's HornetQ client cannot subscribe when more than one HornetQ server is listed. When it attempts to connect to more than one, it fails endlessly in a setup/teardown loop.

       

      We can get this to work with a single node in the connection list, and failover works if we use a standalone Java client built on the HornetQ client API.

       

      More details and steps to reproduce

       

      We have a topic "ndipiazza.pl.testing" deployed on a JBoss HornetQ cluster with two nodes, each deployed using standalone-full-ha.xml.

       

      Here is the messaging subsystem for one of the HornetQ server nodes:

       

      <subsystem xmlns="urn:jboss:domain:messaging:1.1">

          <hornetq-server>

              <clustered>true</clustered>

              <security-enabled>false</security-enabled>

              <persistence-enabled>true</persistence-enabled>

              <journal-file-size>102400</journal-file-size>

              <journal-min-files>2</journal-min-files>

              <connectors>

                  <netty-connector name="netty" socket-binding="messaging" />

                  <netty-connector name="netty-throughput"

                      socket-binding="messaging-throughput">

                      <param key="batch-delay" value="50" />

                  </netty-connector>

                  <in-vm-connector name="in-vm" server-id="0" />

              </connectors>

              <acceptors>

                  <netty-acceptor name="netty" socket-binding="messaging" />

                  <netty-acceptor name="netty-throughput"

                      socket-binding="messaging-throughput">

                      <param key="batch-delay" value="50" />

                      <param key="direct-deliver" value="false" />

                  </netty-acceptor>

                  <in-vm-acceptor name="in-vm" server-id="0" />

              </acceptors>

              <broadcast-groups>

                  <broadcast-group name="bg-group1">

                      <group-address>231.7.7.7</group-address>

                      <group-port>9876</group-port>

                      <broadcast-period>5000</broadcast-period>

                      <connector-ref>netty</connector-ref>

                  </broadcast-group>

              </broadcast-groups>

              <discovery-groups>

                  <discovery-group name="dg-group1">

                      <group-address>231.7.7.7</group-address>

                      <group-port>9876</group-port>

                      <refresh-timeout>10000</refresh-timeout>

                  </discovery-group>

              </discovery-groups>

              <cluster-connections>

                  <cluster-connection name="my-cluster">

                      <address>jms</address>

                      <connector-ref>netty</connector-ref>

                      <discovery-group-ref discovery-group-name="dg-group1" />

                  </cluster-connection>

              </cluster-connections>

              <security-settings>

                  <security-setting match="#">

                      <permission type="send" roles="guest" />

                      <permission type="consume" roles="guest" />

                      <permission type="createNonDurableQueue" roles="guest" />

                      <permission type="deleteNonDurableQueue" roles="guest" />

                  </security-setting>

              </security-settings>

              <address-settings>

                  <!--default for catch all -->

                  <address-setting match="#">

                      <dead-letter-address>jms.queue.DLQ</dead-letter-address>

                      <expiry-address>jms.queue.ExpiryQueue</expiry-address>

                      <redelivery-delay>0</redelivery-delay>

                      <redistribution-delay>1000</redistribution-delay>

                      <max-size-bytes>10485760</max-size-bytes>

                      <address-full-policy>BLOCK</address-full-policy>

                      <message-counter-history-day-limit>10</message-counter-history-day-limit>

                  </address-setting>

              </address-settings>

              <jms-connection-factories>

                  <connection-factory name="InVmConnectionFactory">

                      <connectors>

                          <connector-ref connector-name="in-vm" />

                      </connectors>

                      <entries>

                          <entry name="java:/ConnectionFactory" />

                      </entries>

                  </connection-factory>

                  <connection-factory name="RemoteConnectionFactory">

                      <connectors>

                          <connector-ref connector-name="netty" />

                      </connectors>

                      <entries>

                          <entry name="RemoteConnectionFactory" />

                          <entry name="java:jboss/exported/jms/RemoteConnectionFactory" />

                      </entries>

                      <ha>true</ha>

                  </connection-factory>

                  <pooled-connection-factory name="hornetq-ra">

                      <transaction mode="xa" />

                      <connectors>

                          <connector-ref connector-name="in-vm" />

                      </connectors>

                      <entries>

                          <entry name="java:/JmsXA" />

                      </entries>

                  </pooled-connection-factory>

              </jms-connection-factories>

              <jms-destinations>

                  <jms-topic name="ndipiazza.pl.testing">

                      <entry name="ndipiazza.pl.testing" />

                      <entry name="java:jboss/exported/ndipiazza.pl.testing" />

                  </jms-topic>

              </jms-destinations>

          </hornetq-server>

      </subsystem>

       

      If we set up a standalone Java client using the HornetQ client API, we can subscribe to the topic and fail over between the two nodes. This works great.

       

      But what we really want is an MDB on a third JBoss 7.1.1.Final server that acts as the HornetQ JMS client.

       

      This uses standalone-full-ha.xml and here is the messaging subsystem on this node:

       

       

      <subsystem xmlns="urn:jboss:domain:messaging:1.1">

          <hornetq-server>

              <persistence-enabled>true</persistence-enabled>

              <security-enabled>false</security-enabled>

              <journal-file-size>102400</journal-file-size>

              <journal-min-files>2</journal-min-files>

              <connectors>

                  <connector name="remote-jmsxa1">

                      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

                      <param key="host" value="192.168.16.129" />

                      <param key="port" value="5445" />

                  </connector>

                  <connector name="remote-jmsxa2">

                      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>

                      <param key="host" value="192.168.19.175" />

                      <param key="port" value="5445" />

                  </connector>

                  <netty-connector name="netty" socket-binding="messaging" />

                  <netty-connector name="netty-throughput"

                      socket-binding="messaging-throughput">

                      <param key="batch-delay" value="50" />

                  </netty-connector>

                  <in-vm-connector name="in-vm" server-id="0" />

              </connectors>

              <acceptors>

                  <netty-acceptor name="netty" socket-binding="messaging" />

                  <netty-acceptor name="netty-throughput"

                      socket-binding="messaging-throughput">

                      <param key="batch-delay" value="50" />

                      <param key="direct-deliver" value="false" />

                  </netty-acceptor>

                  <in-vm-acceptor name="in-vm" server-id="0" />

              </acceptors>

              <broadcast-groups>

                  <broadcast-group name="bg-group1">

                      <group-address>231.7.7.7</group-address>

                      <group-port>9876</group-port>

                      <broadcast-period>5000</broadcast-period>

                      <connector-ref>netty</connector-ref>

                  </broadcast-group>

              </broadcast-groups>

              <discovery-groups>

                  <discovery-group name="dg-group1">

                      <group-address>231.7.7.7</group-address>

                      <group-port>9876</group-port>

                      <refresh-timeout>10000</refresh-timeout>

                  </discovery-group>

              </discovery-groups>

              <cluster-connections>

                  <cluster-connection name="my-cluster">

                      <address>jms</address>

                      <connector-ref>netty</connector-ref>

                      <discovery-group-ref discovery-group-name="dg-group1" />

                  </cluster-connection>

              </cluster-connections>

              <security-settings>

                  <security-setting match="#">

                      <permission type="send" roles="guest" />

                      <permission type="consume" roles="guest" />

                      <permission type="createNonDurableQueue" roles="guest" />

                      <permission type="deleteNonDurableQueue" roles="guest" />

                  </security-setting>

              </security-settings>

              <address-settings>

                  <address-setting match="#">

                      <dead-letter-address>jms.queue.DLQ</dead-letter-address>

                      <expiry-address>jms.queue.ExpiryQueue</expiry-address>

                      <redelivery-delay>0</redelivery-delay>

                      <max-size-bytes>10485760</max-size-bytes>

                      <address-full-policy>BLOCK</address-full-policy>

                      <message-counter-history-day-limit>10</message-counter-history-day-limit>

                      <redistribution-delay>1000</redistribution-delay>

                  </address-setting>

              </address-settings>

              <jms-connection-factories>

                  <connection-factory name="InVmConnectionFactory">

                      <connectors>

                          <connector-ref connector-name="in-vm" />

                      </connectors>

                      <entries>

                          <entry name="java:/ConnectionFactory" />

                      </entries>

                  </connection-factory>

                  <connection-factory name="RemoteConnectionFactory">

                      <connectors>

                          <connector-ref connector-name="netty" />

                      </connectors>

                      <entries>

                          <entry name="RemoteConnectionFactory" />

                          <entry name="java:jboss/exported/jms/RemoteConnectionFactory" />

                      </entries>

                  </connection-factory>

                  <pooled-connection-factory name="hornetq-ra">

                      <transaction mode="xa" />

                      <connectors>

                          <connector-ref connector-name="remote-jmsxa1" />

                          <connector-ref connector-name="remote-jmsxa2" />

                      </connectors>

                      <entries>

                          <entry name="java:/JmsXA" />

                      </entries>

                      <ha>true</ha>

                  </pooled-connection-factory>

              </jms-connection-factories>

              <jms-destinations>

                  <jms-queue name="testQueue">

                      <entry name="queue/test" />

                      <entry name="java:jboss/exported/jms/queue/test" />

                  </jms-queue>

                  <jms-topic name="testTopic">

                      <entry name="topic/test" />

                      <entry name="java:jboss/exported/jms/topic/test" />

                  </jms-topic>

              </jms-destinations>

          </hornetq-server>

      </subsystem>

       

      Here is the MDB's ejb-jar.xml (we are not using the MDB annotations at this time):

       

      <?xml version="1.0"?>

      <!DOCTYPE ejb-jar PUBLIC

      '-//Sun Microsystems, Inc.//DTD Enterprise JavaBeans 2.0//EN'

      'http://java.sun.com/dtd/ejb-jar_2_0.dtd'>

      <ejb-jar>

          <enterprise-beans>

              <message-driven>

                  <ejb-name>TestMDB</ejb-name>

                  <ejb-class>test.mdb.TestMDB</ejb-class>

                  <transaction-type>Container</transaction-type>

                  <activation-config>

                      <activation-config-property>

                          <activation-config-property-name>destinationType</activation-config-property-name>

                          <activation-config-property-value>javax.jms.Topic</activation-config-property-value>

                      </activation-config-property>

                      <activation-config-property>

                          <activation-config-property-name>destination</activation-config-property-name>

                          <activation-config-property-value>ndipiazza.pl.testing</activation-config-property-value>

                      </activation-config-property>

                      <activation-config-property>

                          <activation-config-property-name>hA</activation-config-property-name>

                          <activation-config-property-value>true</activation-config-property-value>

                      </activation-config-property>

                      <activation-config-property>

                          <activation-config-property-name>connectionParameters</activation-config-property-name>

                          <activation-config-property-value>host=192.168.16.129;port=5445,host=192.168.19.175;port=5445</activation-config-property-value>

                      </activation-config-property>

                      <activation-config-property>

                          <activation-config-property-name>connectorClassName</activation-config-property-name>

                          <activation-config-property-value>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory,org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</activation-config-property-value>

                      </activation-config-property>

                  </activation-config>

              </message-driven>

          </enterprise-beans>

      </ejb-jar>

       

      The MDB's java code is attached, but there isn't much to see there.

       

      The problem is that the MDB is never able to connect to either host. The attached log is from the MDB client server while this is happening, on JBoss 7.1.1.Final with logging for org.hornetq.ra set to DEBUG.
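      In case it helps anyone reproduce this, DEBUG logging for a single category can be enabled through the logging subsystem in standalone-full-ha.xml. A minimal sketch, showing only the added logger (the handlers and root-logger already present in the file are unchanged):

```xml
<subsystem xmlns="urn:jboss:domain:logging:1.1">
    <!-- ... existing console/periodic-rotating-file handlers ... -->
    <logger category="org.hornetq.ra">
        <level name="DEBUG"/>
    </logger>
    <!-- ... existing root-logger ... -->
</subsystem>
```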

       

      Once again, this WORKS if you simply specify only one host/port in connectionParameters and one connectorClassName.
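      For contrast, the single-node variant that does work is the same activation config with just one entry in each list (only the two relevant properties shown here):

```xml
<activation-config-property>
    <activation-config-property-name>connectorClassName</activation-config-property-name>
    <activation-config-property-value>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</activation-config-property-value>
</activation-config-property>
<activation-config-property>
    <activation-config-property-name>connectionParameters</activation-config-property-name>
    <activation-config-property-value>host=192.168.16.129;port=5445</activation-config-property-value>
</activation-config-property>
```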

       

      Basically, the following repeats endlessly:

       

      11:57:51,010 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Setting up org.hornetq.ra.inflow.HornetQActivationSpec(ra=org.hornetq.ra.HornetQResourceAdapter@2e91cb6 destination=ndipiazza.pl.testing destinationType=javax.jms.Topic ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15)

      11:57:51,011 DEBUG [org.hornetq.ra.HornetQResourceAdapter] (default-short-running-threads-threads - 1) Creating Connection Factory on the resource adapter for transport=[Lorg.hornetq.api.core.TransportConfiguration;@2adb2c53 with ha=true

      11:57:51,012 DEBUG [org.hornetq.ra.recovery.RecoveryManager] (default-short-running-threads-threads - 1) registering recovery for factory : HornetQConnectionFactory [serverLocator=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-16-129, org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-19-175], discoveryGroupConfiguration=null], clientID=null, dupsOKBatchSize=1048576, transactionBatchSize=1048576, readOnly=false]

      11:57:51,016 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Using context {java.naming.factory.url.pkgs=org.jboss.as.naming.interfaces:org.jboss.ejb.client.naming} for org.hornetq.ra.inflow.HornetQActivationSpec(ra=org.hornetq.ra.HornetQResourceAdapter@2e91cb6 destination=ndipiazza.pl.testing destinationType=javax.jms.Topic ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15)

      11:57:51,017 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Destination type defined as javax.jms.Topic

      11:57:51,018 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Retrieving destination ndipiazza.pl.testing of type javax.jms.Topic

      11:57:51,019 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Got destination HornetQTopic[ndipiazza.pl.testing] from ndipiazza.pl.testing

      11:57:51,075 DEBUG [org.hornetq.ra.HornetQResourceAdapter] (default-short-running-threads-threads - 1) Using queue connection DelegatingSession [session=ClientSessionImpl [name=f252f06b-e641-11e1-b09d-209e20524153, username=null, closed=false, factory = ClientSessionFactoryImpl [serverLocator=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-16-129, org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-19-175], discoveryGroupConfiguration=null], connectorConfig=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-19-175, backupConfig=null], metaData=()]@2228d689]

      11:57:51,086 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Using queue connection DelegatingSession [session=ClientSessionImpl [name=f252f06b-e641-11e1-b09d-209e20524153, username=null, closed=false, factory = ClientSessionFactoryImpl [serverLocator=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-16-129, org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-19-175], discoveryGroupConfiguration=null], connectorConfig=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-19-175, backupConfig=null], metaData=(resource-adapter=inbound,jms-session=,)]@2228d689]

      11:57:51,096 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559: Deployed "eService.ear"

      11:57:51,116 DEBUG [org.hornetq.ra.HornetQResourceAdapter] (default-short-running-threads-threads - 1) Using queue connection DelegatingSession [session=ClientSessionImpl [name=f25a918c-e641-11e1-b09d-209e20524153, username=null, closed=false, factory = ClientSessionFactoryImpl [serverLocator=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-16-129, org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-19-175], discoveryGroupConfiguration=null], connectorConfig=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=usdhgupta-ndipiazza-na, backupConfig=null], metaData=()]@307975b6]

      11:57:51,121 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Using queue connection DelegatingSession [session=ClientSessionImpl [name=f25a918c-e641-11e1-b09d-209e20524153, username=null, closed=false, factory = ClientSessionFactoryImpl [serverLocator=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-16-129, org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=192-168-19-175], discoveryGroupConfiguration=null], connectorConfig=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=usdhgupta-ndipiazza-na, backupConfig=null], metaData=(resource-adapter=inbound,jms-session=,)]@307975b6]

      11:57:51,128 INFO [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) awaiting topic/queue creation ndipiazza.pl.testing

      11:57:51,129 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Tearing down org.hornetq.ra.inflow.HornetQActivationSpec(ra=org.hornetq.ra.HornetQResourceAdapter@2e91cb6 destination=ndipiazza.pl.testing destinationType=javax.jms.Topic ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15)

      11:57:51,170 DEBUG [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Tearing down complete org.hornetq.ra.inflow.HornetQActivation(spec=org.hornetq.ra.inflow.HornetQActivationSpec mepf=org.jboss.as.ejb3.inflow.JBossMessageEndpointFactory active=true destination=ndipiazza.pl.testing transacted=true)

      11:57:53,171 INFO [org.hornetq.ra.inflow.HornetQActivation] (default-short-running-threads-threads - 1) Attempting to reconnect org.hornetq.ra.inflow.HornetQActivationSpec(ra=org.hornetq.ra.HornetQResourceAdapter@2e91cb6 destination=ndipiazza.pl.testing destinationType=javax.jms.Topic ack=Auto-acknowledge durable=false clientID=null user=null maxSession=15)

       

      Any help would be appreciated!!!

       

      -Nicholas

       

      Message was edited by Nicholas DiPiazza: removed the useless logs initially added to the ticket; added hornetq-failover-issue.zip, which has all the details needed.

        • 1. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
          jbertram

          I set up a quick test on my local machine and everything worked as expected.  There were some issues with your configuration in AS 7.1.1.Final so I adjusted those as needed.  Here is the config I used for the two cluster nodes:

           

          <subsystem xmlns="urn:jboss:domain:messaging:1.1">
            <hornetq-server>
              <clustered>true</clustered>
              <security-enabled>false</security-enabled>
              <persistence-enabled>true</persistence-enabled>
              <journal-file-size>102400</journal-file-size>
              <journal-min-files>2</journal-min-files>
              <connectors>
                <netty-connector name="netty" socket-binding="messaging"/>
                <netty-connector name="netty-throughput" socket-binding="messaging-throughput">
                  <param key="batch-delay" value="50"/>
                </netty-connector>
                <in-vm-connector name="in-vm" server-id="0"/>
              </connectors>
              <acceptors>
                <netty-acceptor name="netty" socket-binding="messaging"/>
                <netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
                  <param key="batch-delay" value="50"/>
                  <param key="direct-deliver" value="false"/>
                </netty-acceptor>
                <in-vm-acceptor name="in-vm" server-id="0"/>
              </acceptors>
              <broadcast-groups>
                <broadcast-group name="bg-group1">
                  <group-address>231.7.7.7</group-address>
                  <group-port>9876</group-port>
                  <broadcast-period>5000</broadcast-period>
                  <connector-ref>netty</connector-ref>
                </broadcast-group>
              </broadcast-groups>
              <discovery-groups>
                <discovery-group name="dg-group1">
                  <group-address>231.7.7.7</group-address>
                  <group-port>9876</group-port>
                  <refresh-timeout>10000</refresh-timeout>
                </discovery-group>
              </discovery-groups>
              <cluster-connections>
                <cluster-connection name="my-cluster">
                  <address>jms</address>
                  <connector-ref>netty</connector-ref>
                  <discovery-group-ref discovery-group-name="dg-group1"/>
                </cluster-connection>
              </cluster-connections>
              <security-settings>
                <security-setting match="#">
                  <permission type="send" roles="guest"/>
                  <permission type="consume" roles="guest"/>
                  <permission type="createNonDurableQueue" roles="guest"/>
                  <permission type="deleteNonDurableQueue" roles="guest"/>
                </security-setting>
              </security-settings>
              <address-settings>
                <!--default for catch all-->
                <address-setting match="#">
                  <dead-letter-address>jms.queue.DLQ</dead-letter-address>
                  <expiry-address>jms.queue.ExpiryQueue</expiry-address>
                  <redelivery-delay>0</redelivery-delay>
                  <redistribution-delay>1000</redistribution-delay>
                  <max-size-bytes>10485760</max-size-bytes>
                  <address-full-policy>BLOCK</address-full-policy>
                  <message-counter-history-day-limit>10</message-counter-history-day-limit>
                </address-setting>
              </address-settings>
              <jms-connection-factories>
                <connection-factory name="InVmConnectionFactory">
                  <connectors>
                    <connector-ref connector-name="in-vm"/>
                  </connectors>
                  <entries>
                    <entry name="java:/ConnectionFactory"/>
                  </entries>
                </connection-factory>
                <connection-factory name="RemoteConnectionFactory">
                  <connectors>
                    <connector-ref connector-name="netty"/>
                  </connectors>
                  <entries>
                    <entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
                  </entries>
                  <ha>true</ha>
                </connection-factory>
                <pooled-connection-factory name="hornetq-ra">
                  <transaction mode="xa"/>
                  <connectors>
                    <connector-ref connector-name="in-vm"/>
                  </connectors>
                  <entries>
                    <entry name="java:/JmsXA"/>
                  </entries>
                </pooled-connection-factory>
              </jms-connection-factories>
              <jms-destinations>
                <jms-topic name="ndipiazza.pl.testing">
                  <entry name="ndipiazza.pl.testing"/>
                  <entry name="java:jboss/exported/ndipiazza.pl.testing"/>
                </jms-topic>
              </jms-destinations>
            </hornetq-server>
          </subsystem>
          

           

          I started the two cluster nodes like this (where forumIssue.xml was a copy of standalone-full-ha.xml with the aforementioned messaging subsystem):

           

          ./standalone.sh -c forumIssue.xml -b 127.0.0.1 -bmanagement 127.0.0.1
          ./standalone.sh -c forumIssue.xml -b 127.0.0.2 -bmanagement 127.0.0.2
          

           

          Then I deployed this MDB to a third instance:

           

          import javax.ejb.ActivationConfigProperty;
          import javax.ejb.MessageDriven;
          import javax.jms.Message;
          import javax.jms.MessageListener;
          
          @MessageDriven(name = "MDB_CMT_TxRequiredExample", activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
                  @ActivationConfigProperty(propertyName = "destination", propertyValue = "ndipiazza.pl.testing"),
                  @ActivationConfigProperty(propertyName = "setupAttempts", propertyValue = "-1"),
                  @ActivationConfigProperty(propertyName = "hA", propertyValue = "true"),
                  @ActivationConfigProperty(propertyName = "connectorClassName", propertyValue = "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory,org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"),        
                  @ActivationConfigProperty(propertyName = "connectionParameters", propertyValue = "host=127.0.0.1;port=5445,host=127.0.0.2;port=5445")})
          public class MDB_CMT_TxRequiredExample implements MessageListener
          { 
             public void onMessage(final Message message)
             {
                   System.out.println("Got message: " + message);
             }
          }
          

           

          I started the third instance like this:

           

          ./standalone.sh -c standalone-full.xml -b 127.0.0.3 -bmanagement 127.0.0.3
          

           

          When I sent a JMS message to ndipiazza.pl.testing on 127.0.0.1 I got this on the server hosting the MDB as expected:

           

          13:12:23,572 INFO  [stdout] (Thread-16 (HornetQ-client-global-threads-1542054)) Got message: HornetQMessage[ID:c301dc55-e704-11e1-bb17-0024d7397f8c]:PERSISTENT
          

           

          I don't have multiple machines available to perform a real over-the-network test, but this kind of test has typically been sufficient in the 4 years or so I've been doing JMS work.  Does a local test like this work for you as well?

          • 2. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
            ndipiazza

            I confirmed that when I ran the nodes on the same machine, I was able to deploy my MDB client with failover successfully.

             

            Then I repeated the exact same test (same JBoss version and configuration files) spread across three different machines. This reproduced the problem.

             

            I then tried a different test, where the JMS server nodes were on the same machine but the JMS client node was on a different machine. This WORKED.

             

            So the problem occurs only when the HornetQ server nodes are on different machines.
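            One thing worth checking in the multi-machine case (my assumption, not something confirmed in this thread) is whether each server node advertises a connector address the other boxes can actually reach. The netty connector referenced by the broadcast group resolves its host through the "messaging" socket-binding, which in turn resolves against the server's public interface, so a node left bound to 127.0.0.1 broadcasts an address that remote clients cannot use. A sketch of the relevant standalone-full-ha.xml pieces (the 192.168.16.129 value is illustrative and differs per node):

```xml
<interfaces>
    <interface name="public">
        <!-- must be the machine's routable address, not 127.0.0.1 -->
        <inet-address value="${jboss.bind.address:192.168.16.129}"/>
    </interface>
</interfaces>

<socket-binding-group name="standard-sockets" default-interface="public"
        port-offset="${jboss.socket.binding.port-offset:0}">
    <!-- the netty connector/acceptor resolve host and port from this binding -->
    <socket-binding name="messaging" port="5445"/>
</socket-binding-group>
```

            Equivalently, starting each server with -b set to its real IP (as in the launch commands above) overrides jboss.bind.address the same way.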

             

            I turned up the logging for org.hornetq to TRACE, and reproduced the problem.

             

            Attached is a zip file "hornetq-failover-issue.zip" (https://community.jboss.org/servlet/JiveServlet/download/753824-64324/hornetq-failover-issue.zip) with:

             

            1. hornetq-client-node's logs and standalone-full-ha.xml
            2. hornetq-server-node-a's logs and standalone-full-ha.xml
            3. hornetq-server-node-b's logs and standalone-full-ha.xml
            4. hq-messaging-test.war
            5. src for the MDB

             

            Steps to reproduce are simple. You have Box A, Box B, and Box C:

             

            1. Install JBoss 7.1.1.Final on Box A
            2. Run Box A with hornetq-server-node-a's standalone-full-ha.xml
            3. Install JBoss 7.1.1.Final on Box B
            4. Run Box B with hornetq-server-node-b's standalone-full-ha.xml
            5. Install JBoss 7.1.1.Final on Box C
            6. Run Box C with hornetq-client-node's standalone-full-ha.xml
            7. Deploy hq-messaging-test.war on Box C.
            8. At this point you will see the loop of failures to connect to the topic in the logs on Box C.
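
            For reference, the per-box startup commands might look like this. This is a sketch assuming a default JBoss AS 7.1.1.Final install; the `$JBOSS_HOME` paths and the location of the copied configuration files are placeholders:

```shell
# Box A (and similarly Box B): HornetQ server node.
# Copy the corresponding standalone-full-ha.xml into the configuration dir first,
# then start with that server config:
$JBOSS_HOME/bin/standalone.sh -c standalone-full-ha.xml

# Box C: MDB client node, started with hornetq-client-node's config:
$JBOSS_HOME/bin/standalone.sh -c standalone-full-ha.xml

# Box C: deploy the test WAR via the deployment scanner:
cp hq-messaging-test.war $JBOSS_HOME/standalone/deployments/
```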
            • 3. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
              jbertram

              After taking a quick look at the logs I see this on node-a:

               

              16:11:44,473 TRACE [org.hornetq.core.protocol.core.impl.RemotingConnectionImpl] (Old I/O server worker (parentId: 711460, [id: 0x000adb24, /192.168.2.6:5445])) handling packet PACKET(SessionCreateConsumerMessage)[type=40, channelID=10, packetObject=SessionCreateConsumerMessage, queueName=a621bfdc-a949-4070-afa3-03adecd5bb6e, filterString=null]
              16:11:44,474 DEBUG [org.hornetq.core.protocol.core.ServerSessionPacketHandler] (Old I/O server worker (parentId: 711460, [id: 0x000adb24, /192.168.2.6:5445])) Sending exception to client: HornetQException[errorCode=100 message=Queue a621bfdc-a949-4070-afa3-03adecd5bb6e does not exist]
                        at org.hornetq.core.server.impl.ServerSessionImpl.createConsumer(ServerSessionImpl.java:339) [hornetq-core-2.2.13.Final.jar:]
                        at org.hornetq.core.protocol.core.ServerSessionPacketHandler.handlePacket(ServerSessionPacketHandler.java:214) [hornetq-core-2.2.13.Final.jar:]
                        at org.hornetq.core.protocol.core.impl.ChannelImpl.handlePacket(ChannelImpl.java:508) [hornetq-core-2.2.13.Final.jar:]
                        at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.doBufferReceived(RemotingConnectionImpl.java:556) [hornetq-core-2.2.13.Final.jar:]
                        at org.hornetq.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:517) [hornetq-core-2.2.13.Final.jar:]
                        at org.hornetq.core.remoting.server.impl.RemotingServiceImpl$DelegatingBufferHandler.bufferReceived(RemotingServiceImpl.java:533) [hornetq-core-2.2.13.Final.jar:]
                        at org.hornetq.core.remoting.impl.netty.HornetQChannelHandler.messageReceived(HornetQChannelHandler.java:73) [hornetq-core-2.2.13.Final.jar:]
                        at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:100) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.StaticChannelPipeline$StaticChannelHandlerContext.sendUpstream(StaticChannelPipeline.java:534) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:287) [netty-3.2.6.Final.jar:]
                        at org.hornetq.core.remoting.impl.netty.HornetQFrameDecoder2.decode(HornetQFrameDecoder2.java:169) [hornetq-core-2.2.13.Final.jar:]
                        at org.hornetq.core.remoting.impl.netty.HornetQFrameDecoder2.messageReceived(HornetQFrameDecoder2.java:134) [hornetq-core-2.2.13.Final.jar:]
                        at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:367) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:100) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44) [netty-3.2.6.Final.jar:]
                        at org.jboss.netty.util.VirtualExecutorService$ChildExecutorRunnable.run(VirtualExecutorService.java:181) [netty-3.2.6.Final.jar:]
                        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.6.0_24]
                        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.6.0_24]
                        at java.lang.Thread.run(Thread.java:679) [rt.jar:1.6.0_24]
              

               

              And then this on the client:

               

              16:11:45,213 TRACE [org.hornetq.core.protocol.core.impl.RemotingConnectionImpl] (Old I/O client worker ([id: 0x201a755e, /192.168.2.2:58451 => /192.168.2.6:5445])) handling packet PACKET(HornetQExceptionMessage)[type=20, channelID=10, packetObject=HornetQExceptionMessage, exception= HornetQException[errorCode=100 message=Queue a621bfdc-a949-4070-afa3-03adecd5bb6e does not exist]]
              

               

              So the client is connecting to the server, but the server is telling the client that it can't find the proper queue.

               

              To be clear, a topic subscription is represented internally by a HornetQ queue, so while the message may seem odd, it is reporting the error in the proper terms from what I can tell.  I'll investigate further to see why the internal HornetQ queue doesn't exist.

              • 4. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                ndipiazza

                Hi Justin,

                 

                First of all, I changed the JBoss AS7 consumer node to use standalone-full.xml instead of standalone-full-ha.xml. This made no difference.

                 

                Then you had said the other day "I'm particularly interested in line 337 of org.hornetq.core.server.impl.ServerSessionImpl#createConsumer"

                 

                So I did the following:

                 

                1) Started JBoss AS7 servers (not the consumer yet)

                2) Started remote debugging of Server node A, added the breakpoint @ line 337 of org.hornetq.core.server.impl.ServerSessionImpl#createConsumer

                3) Started JBoss AS7 consumer server

                 

                This breakpoint was never hit.

                 

                I put a breakpoint at the first line of org.hornetq.core.protocol.core.ServerSessionPacketHandler.handlePacket(Packet)   just to verify the debugger is working... and I see something strange.

                 

                I only get one kind of packet coming in, and that is of type org.hornetq.core.protocol.core.impl.PacketImpl.SESS_XA_INDOUBT_XIDS.

                 

                It only comes in about once every minute or two.

                 

                This contradicts what we thought was going to happen. Any ideas?

                • 5. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                  jbertram

                  This breakpoint was never hit.

                  As I see it, that would mean you never saw this in server A's log (the queue's id would be different of course):

                   

                  DEBUG [org.hornetq.core.protocol.core.ServerSessionPacketHandler] (Old I/O server worker (parentId: 711460, [id: 0x000adb24, /192.168.2.6:5445])) Sending exception to client: HornetQException[errorCode=100 message=Queue a621bfdc-a949-4070-afa3-03adecd5bb6e does not exist]
                  

                   

                  Is that true?

                   

                  If so, I would conclude that the client wasn't attempting to connect to that server.

                  • 6. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                    ndipiazza

                    DHCP had changed my IP address and I forgot to update the MDB's annotations. Sorry about that, Justin.

                     

                    I added a breakpoint at ServerSessionImpl.createConsumer(long, SimpleString, SimpleString, boolean) line: 339. Here is stack at that point:

                     

                     

                    this = ServerSessionImpl (id=555)
                        autoCommitAcks = false
                        autoCommitSends = false
                        callback = CoreSessionCallback (id=567)
                        consumers = ConcurrentHashMap<K,V> (id=568)
                        creationTime = 1345762702419
                        currentLargeMessage = null
                        defaultAddress = null
                        managementAddress = SimpleString (id=290)
                        managementService = ManagementServiceImpl (id=291)
                        metaData = HashMap<K,V> (id=571)
                        minLargeMessageSize = 102400
                        name = "088b5b67-ed76-11e1-bddd-d6e620524153" (id=572)
                        password = null
                        postOffice = PostOfficeImpl (id=270)
                        preAcknowledge = false
                        remotingConnection = RemotingConnectionImpl (id=558)
                        resourceManager = ResourceManagerImpl (id=275)
                        routingContext = RoutingContextImpl (id=573)
                        securityStore = SecurityStoreImpl (id=282)
                        server = HornetQServerImpl (id=246)
                        sessionContext = OperationContextImpl (id=574)
                        started = false
                        storageManager = JournalStorageManager (id=264)
                        strictUpdateDeliveryCount = false
                        targetAddressInfos = HashMap<K,V> (id=575)
                        tempQueueCleannerUppers = HashMap<K,V> (id=576)
                        timeoutSeconds = 300
                        tx = null
                        username = null
                        xa = true

                    consumerID = 0
                    queueName = SimpleString (id=560)
                        data = (id=577)
                        hash = 0
                        str = "da4bf44f-0deb-4a75-888e-40453023e4fd" (id=563)
                    filterString = null
                    browseOnly = false
                    binding = null

                     

                     

                     

                    So obviously postOffice.getBinding(name) is returning null. I stepped into this method and realized it looks the name up in postOffice.addressManager's nameMap. Here is that nameMap ConcurrentHashMap:

                     

                    nameMap{sf.my-cluster.bafb4008-e70b-11e1-acac-0026187af0eb=LocalQueueBinding [address=sf.my-cluster.bafb4008-e70b-11e1-acac-0026187af0eb, queue=QueueImpl[name=sf.my-cluster.bafb4008-e70b-11e1-acac-0026187af0eb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6e080025-e70b-11e1-9454-001aa0bdd9cd]]@3b76982e, filter=null, name=sf.my-cluster.bafb4008-e70b-11e1-acac-0026187af0eb, clusterName=sf.my-cluster.bafb4008-e70b-11e1-acac-0026187af0eb6e080025-e70b-11e1-9454-001aa0bdd9cd], jms.topic.ndipiazza.pl.testing=LocalQueueBinding [address=jms.topic.ndipiazza.pl.testing, queue=QueueImpl[name=jms.topic.ndipiazza.pl.testing, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6e080025-e70b-11e1-9454-001aa0bdd9cd]]@6db38815, filter=FilterImpl [sfilterString=__HQX=-1], name=jms.topic.ndipiazza.pl.testing, clusterName=jms.topic.ndipiazza.pl.testing6e080025-e70b-11e1-9454-001aa0bdd9cd], jms.topic.ndipiazza.pl.testinge12c5f6f-e7f9-11e1-9fb7-0026187af0eb=RemoteQueueBindingImpl [address=jms.topic.ndipiazza.pl.testing, consumerCount=0, distance=1, filters=[], id=948, idsHeaderName=_HQ_ROUTE_TOsf.my-cluster.e12c5f6f-e7f9-11e1-9fb7-0026187af0eb, queueFilter=FilterImpl [sfilterString=__HQX=-1], remoteQueueID=8589934873, routingName=jms.topic.ndipiazza.pl.testing, storeAndForwardQueue=QueueImpl[name=sf.my-cluster.e12c5f6f-e7f9-11e1-9fb7-0026187af0eb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6e080025-e70b-11e1-9454-001aa0bdd9cd]]@7559ec47, uniqueName=jms.topic.ndipiazza.pl.testinge12c5f6f-e7f9-11e1-9fb7-0026187af0eb], notif.8be22112-ed73-11e1-9db1-0026187af0eb.HornetQServerImpl::serverUUID=e12c5f6f-e7f9-11e1-9fb7-0026187af0eb=LocalQueueBinding [address=hornetq.notifications, queue=QueueImpl[name=notif.8be22112-ed73-11e1-9db1-0026187af0eb.HornetQServerImpl::serverUUID=e12c5f6f-e7f9-11e1-9fb7-0026187af0eb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6e080025-e70b-11e1-9454-001aa0bdd9cd]]@22aedd3, 
filter=FilterImpl [sfilterString=_HQ_Binding_Type<>2 AND _HQ_NotifType IN ('BINDING_ADDED','BINDING_REMOVED','CONSUMER_CREATED','CONSUMER_CLOSED','PROPOSAL','PROPOSAL_RESPONSE') AND _HQ_Distance<1 AND (_HQ_Address LIKE 'jms%')], name=notif.8be22112-ed73-11e1-9db1-0026187af0eb.HornetQServerImpl::serverUUID=e12c5f6f-e7f9-11e1-9fb7-0026187af0eb, clusterName=notif.8be22112-ed73-11e1-9db1-0026187af0eb.HornetQServerImpl::serverUUID=e12c5f6f-e7f9-11e1-9fb7-0026187af0eb6e080025-e70b-11e1-9454-001aa0bdd9cd], sf.my-cluster.e12c5f6f-e7f9-11e1-9fb7-0026187af0eb=LocalQueueBinding [address=sf.my-cluster.e12c5f6f-e7f9-11e1-9fb7-0026187af0eb, queue=QueueImpl[name=sf.my-cluster.e12c5f6f-e7f9-11e1-9fb7-0026187af0eb, postOffice=PostOfficeImpl [server=HornetQServerImpl::serverUUID=6e080025-e70b-11e1-9454-001aa0bdd9cd]]@7559ec47, filter=null, name=sf.my-cluster.e12c5f6f-e7f9-11e1-9fb7-0026187af0eb, clusterName=sf.my-cluster.e12c5f6f-e7f9-11e1-9fb7-0026187af0eb6e080025-e70b-11e1-9454-001aa0bdd9cd]}
                    • 7. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                      hgupta

                      Hi guys - any update on this? We are blocked on our GA release because of this issue.

                      • 8. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                        ndipiazza

                        Justin is out for two weeks. I'm looking to make a branch and try to fix this tonight. Himanshu, I would go forward using a workaround for the time being. This is a pretty deeply embedded issue.

                         

                        I will update you as soon as I know anything.

                        • 9. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                          ndipiazza

                          Justin and Himanshu,

                           

                          I debugged the code and have located the issue.

                           

                          I was able to reproduce the problem with all the JBoss instances on the same box, so it turned out not to be a remote-queue issue. The reason we had thought it was a remote issue is that the error message is quite different when reproducing the problem on a local cluster versus a remote cluster. That might be worth opening a ticket for, but I doubt it.

                           

                          The problematic code is on the server, between lines 103 and 176 of org/hornetq/ra/inflow/HornetQMessageHandler.java:

                           

                          
                          
                          if (activation.isTopic() && spec.isSubscriptionDurable())
                          {
                             // our MDB is not durable, so we will hit the else clause
                             // ....
                          }
                          else
                          {
                             SimpleString queueName;
                             if (activation.isTopic())
                             {
                                if (activation.getTopicTemporaryQueue() == null)
                                {
                                   queueName = new SimpleString(UUID.randomUUID().toString());
                                   session.createQueue(activation.getAddress(), queueName, selectorString, false);
                                   activation.setTopicTemporaryQueue(queueName);
                                }
                                else
                                {
                                   queueName = activation.getTopicTemporaryQueue();
                                }
                             }
                             else
                             {
                                queueName = activation.getAddress();
                             }
                             consumer = session.createConsumer(queueName, selectorString);
                          }
                          

                           

                          Remember, my MDB did not configure the subscription to be durable, and there is no topic temporary queue yet. So at this point, when attempting to locate the queue, JBoss creates its own random UUID as the queue name. But that makes the queue name, my only means of identifying the queue, some random value, which seems totally wrong. This explains why we kept seeing all the UUID keys in the postOffice during the debug session I posted about earlier.
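
                          To make the contrast concrete, here is a plain-Java sketch (no HornetQ dependency) of the two naming paths. The `clientID + "." + subscriptionName` format is an assumption for illustration only; the real durable name comes from HornetQDestination.createQueueNameForDurableSubscription:

```java
import java.util.UUID;

// Sketch of the two queue-naming paths in HornetQMessageHandler.
// SubscriptionNaming and its methods are hypothetical names for illustration.
public class SubscriptionNaming {

    // Durable path: a deterministic name derived from clientID and
    // subscriptionName, so a reconnecting consumer can find its queue again.
    // (The "." separator is an assumption; the actual format is produced by
    // HornetQDestination.createQueueNameForDurableSubscription.)
    static String durableQueueName(String clientID, String subscriptionName) {
        return clientID + "." + subscriptionName;
    }

    // Non-durable path: a fresh random UUID each time, which cannot be
    // re-derived later -- the behavior observed in the postOffice nameMap.
    static String nonDurableQueueName() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        String first = durableQueueName("ndipiazza.pl.testing", "ndipiazza.pl.testing");
        String second = durableQueueName("ndipiazza.pl.testing", "ndipiazza.pl.testing");
        System.out.println("durable (stable):     " + first);
        System.out.println("non-durable (random): " + nonDurableQueueName());
        if (!first.equals(second)) throw new AssertionError("durable names must be deterministic");
        if (nonDurableQueueName().equals(nonDurableQueueName())) throw new AssertionError("non-durable names should be random");
    }
}
```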

                           

                          I found a workaround we can use: change the MDB so that the if statement at line 103 evaluates to true.

                           

                          The values you will need to change are 3 attributes on the MDB: clientID, subscriptionName, and subscriptionDurability. Here is my TestMDB.java's annotation list now:

                           

                          @MessageDriven(name = "MDB_CMT_TxRequiredExample", activationConfig = {
                                  @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
                                  @ActivationConfigProperty(propertyName = "destination", propertyValue = "ndipiazza.pl.testing"),
                                  @ActivationConfigProperty(propertyName = "setupAttempts", propertyValue = "-1"),
                                  @ActivationConfigProperty(propertyName = "hA", propertyValue = "true"),
                                  @ActivationConfigProperty(propertyName = "clientID", propertyValue = "ndipiazza.pl.testing"),
                                  @ActivationConfigProperty(propertyName = "subscriptionName", propertyValue = "ndipiazza.pl.testing"),
                                  @ActivationConfigProperty(propertyName = "connectorClassName", propertyValue = "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory,org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"),
                                  @ActivationConfigProperty(propertyName = "connectionParameters", propertyValue = "host=192.168.2.6;port=5445,host=192.168.2.6;port=5545"),
                                  @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable") })
                          public class TestMDB implements MessageDrivenBean, MessageListener
                          

                           

                          Now that we have set these new annotations, this part of the code:

                           

                          if (activation.isTopic() && spec.isSubscriptionDurable())
                                {
                                   String subscriptionName = spec.getSubscriptionName();
                                   String clientID = spec.getClientID();
                          
                                   // Durable sub
                                   if (clientID == null)
                                   {
                                      throw new InvalidClientIDException("Cannot create durable subscription for " + subscriptionName +
                                                                         " - client ID has not been set");
                                   }
                          
                                   SimpleString queueName = new SimpleString(HornetQDestination.createQueueNameForDurableSubscription(clientID,
                                                                                                                                      subscriptionName));
                          
                                   QueueQuery subResponse = session.queueQuery(queueName);
                          

                           

                          will now load up the topic correctly.

                           

                          I tested this, and it works. I was able to set up a redundant pair of HornetQ servers and sent some messages to my topic using this simple Java program:

                           

                          import java.util.HashMap;
                          import java.util.Map;
                          
                          import javax.jms.Connection;
                          import javax.jms.MessageProducer;
                          import javax.jms.Session;
                          import javax.jms.TextMessage;
                          import javax.jms.Topic;
                          
                          import org.hornetq.api.core.TransportConfiguration;
                          import org.hornetq.api.jms.HornetQJMSClient;
                          import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
                          import org.hornetq.jms.client.HornetQConnectionFactory;
                          
                          
                          
                          public class PublishMessage {
                          
                              public static void main(String[] args) throws Exception
                              {
                                 Connection connection = null;
                                 try
                                 {
                                     // Step 1. Directly instantiate the JMS Topic object.
                                     Topic topic = HornetQJMSClient.createTopic("ndipiazza.pl.testing");
                          
                                     // Step 2. Instantiate the TransportConfiguration object which contains the knowledge of what transport to use,
                                     // The server port etc.
                          
                                     Map<String, Object> connectionParams = new HashMap<String, Object>();
                                     connectionParams.put("port", 5445);
                                     connectionParams.put("host", "192.168.2.6");
                          
                                     TransportConfiguration transportConfiguration = new TransportConfiguration(
                                             NettyConnectorFactory.class.getName(), connectionParams);
                          
                                     // Step 3 Directly instantiate the JMS ConnectionFactory object
                                     // using that TransportConfiguration
                                     HornetQConnectionFactory cf = new HornetQConnectionFactory(false, transportConfiguration);
                          
                                     // Step 4.Create a JMS Connection
                                     connection = cf.createConnection();
                          
                                     // Step 5. Create a JMS Session
                                     Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                          
                                     // Step 6. Create a JMS Message Producer
                                     MessageProducer producer = session.createProducer(topic);
                          
                                     // Step 7. Create a Text Message
                                     TextMessage message = session.createTextMessage("This is a text message");
                          
                                     System.out.println("Sent message: " + message.getText());
                          
                                     // Step 8. Send the Message
                                     producer.send(message);
                          
                                     producer.close();
                                 }
                                 finally
                                 {
                                    if (connection != null)
                                    {
                                       connection.close();
                                    }
                                 }
                              }
                          }
                          

                           

                          Pom:

                           

                          <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                              xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
                              <modelVersion>4.0.0</modelVersion>
                              <groupId>com.ndipiazza</groupId>
                              <artifactId>message-publisher</artifactId>
                              <version>0.0.1-SNAPSHOT</version>
                              <dependencies>
                                  <dependency>
                                      <groupId>org.jboss.spec.javax.jms</groupId>
                                      <artifactId>jboss-jms-api_1.1_spec</artifactId>
                                      <version>1.0.1.Final</version>
                                  </dependency>
                                  <dependency>
                                      <groupId>org.hornetq</groupId>
                                      <artifactId>hornetq-jms-client</artifactId>
                                      <version>2.2.13.Final</version>
                                  </dependency>
                                  <dependency>
                                      <groupId>org.hornetq</groupId>
                                      <artifactId>hornetq-core-client</artifactId>
                                      <version>2.2.13.Final</version>
                                  </dependency>
                              </dependencies>
                          </project>
                          

                           

                          Upon sending a message to the topic, I verified that the subscriber JBoss instance receives and prints the message.

                           

                          Justin - should I create an issue in the tracker for this problem? Or is this how a non-durable topic subscription is supposed to act? It doesn't seem right to me.

                          • 10. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                            hgupta

                            Hi Nicholas,

                             

                            Firstly, thanks for all the help. I tried the changes you mentioned to make the MDB durable, but I still see the same problem. This is the error I am getting:

                             

                            11:36:18,032 INFO  [stdout] (Thread-22 (HornetQ-client-global-threads-142188355))
                            11:36:18,034 INFO  [stdout] (Thread-22 (HornetQ-client-global-threads-142188355)) ************************* In onMessage of TestMDB : 19
                            11:36:18,060 WARN  [org.hornetq.jms.server.recovery.HornetQXAResourceWrapper] (Thread-23 (HornetQ-client-global-threads-142188355)) Notified of connection failure in xa recovery co
                            nnectionFactory for provider ClientSessionFactoryImpl [serverLocator=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host
                            =10-10-16-64, org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=10-10-19-175], discoveryGroupConfiguration=null], connectorConfig=org-hornetq-core-remoting-
                            impl-netty-NettyConnectorFactory?port=5445&host=v39w3, backupConfig=null] will attempt reconnect on next pass: HornetQException[errorCode=0 message=Netty exception]
                             at org.hornetq.core.remoting.impl.netty.HornetQChannelHandler.exceptionCaught(HornetQChannelHandler.java:108) [hornetq-core-2.2.13.Final.jar:]
                             at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:142) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.StaticChannelPipeline$StaticChannelHandlerContext.sendUpstream(StaticChannelPipeline.java:534) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.SimpleChannelUpstreamHandler.exceptionCaught(SimpleChannelUpstreamHandler.java:148) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:122) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:372) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.StaticChannelPipeline.sendUpstream(StaticChannelPipeline.java:367) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:432) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:95) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.2.6.Final.jar:]
                                   at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44) [netty-3.2.6.Final.jar:]
                             at org.jboss.netty.util.VirtualExecutorService$ChildExecutorRunnable.run(VirtualExecutorService.java:181) [netty-3.2.6.Final.jar:]
                             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.7.0_03]
                             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.7.0_03]
                             at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_03]
                            Caused by: java.net.SocketException: Connection reset
                             at java.net.SocketInputStream.read(SocketInputStream.java:189) [rt.jar:1.7.0_03]
                             at java.net.SocketInputStream.read(SocketInputStream.java:121) [rt.jar:1.7.0_03]
                             at java.net.SocketInputStream.read(SocketInputStream.java:203) [rt.jar:1.7.0_03]
                             at java.io.FilterInputStream.read(FilterInputStream.java:83) [rt.jar:1.7.0_03]
                             at java.io.PushbackInputStream.read(PushbackInputStream.java:139) [rt.jar:1.7.0_03]
                             at org.jboss.netty.channel.socket.oio.OioWorker.run(OioWorker.java:86) [netty-3.2.6.Final.jar:]
                             ... 4 more
                            
                            11:36:28,056 WARN  [org.hornetq.core.client.impl.ClientConsumerImpl] (Thread-17 (HornetQ-client-global-threads-142188355)) Timed out waiting for handler to complete processing
                             11:36:28,064 WARN  [org.hornetq.core.client.impl.ClientSessionImpl] (Thread-22 (HornetQ-client-global-threads-142188355)) failover occured during commit throwing XAException.XA_RETRY
                             11:36:28,073 WARN  [com.arjuna.ats.jta] (Thread-22 (HornetQ-client-global-threads-142188355)) ARJUNA016039: onePhaseCommit on < formatId=131077, gtrid_length=29, bqual_length=36, tx_uid=0:ffff0a0a103e:3494a78a:504649d6:50, node_name=1, branch_uid=0:ffff0a0a103e:3494a78a:504649d6:51, subordinatenodename=null, eis_name=unknown eis name > (DelegatingSession [session=ClientSessionImpl [name=3ea62e36-f6bf-11e1-916b-6c6220524153, username=null, closed=true, factory = ClientSessionFactoryImpl [serverLocator=ServerLocatorImpl [initialConnectors=[org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=10-10-16-64, org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=10-10-19-175], discoveryGroupConfiguration=null], connectorConfig=org-hornetq-core-remoting-impl-netty-NettyConnectorFactory?port=5445&host=v39w3, backupConfig=null], metaData=(jms-client-id=egain.pl.testing,resource-adapter=inbound,jms-session=,)]@45dab09f]) failed with exception XAException.XA_RETRY: javax.transaction.xa.XAException
                            
                             at org.hornetq.core.client.impl.ClientSessionImpl.commit(ClientSessionImpl.java:1329)
                             at org.hornetq.core.client.impl.DelegatingSession.commit(DelegatingSession.java:163)
                             at com.arjuna.ats.internal.jta.resources.arjunacore.XAResourceRecord.topLevelOnePhaseCommit(XAResourceRecord.java:667)
                             at com.arjuna.ats.arjuna.coordinator.BasicAction.onePhaseCommit(BasicAction.java:2283)
                             at com.arjuna.ats.arjuna.coordinator.BasicAction.End(BasicAction.java:1466)
                             at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:98)
                             at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:164)
                             at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1165)
                             at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:117)
                             at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
                             at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.afterDelivery(MessageEndpointInvocationHandler.java:72) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
                             at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) [:1.7.0_03]
                             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_03]
                             at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_03]
                             at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.handle(AbstractInvocationHandler.java:60) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
                             at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.doInvoke(MessageEndpointInvocationHandler.java:136) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
                             at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:73) [jboss-as-ejb3-7.1.1.Final.jar:7.1.1.Final]
                              at $Proxy12.afterDelivery(Unknown Source)
                              at org.hornetq.ra.inflow.HornetQMessageHandler.onMessage(HornetQMessageHandler.java:287)
                             at org.hornetq.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:983)
                             at org.hornetq.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:48)
                             at org.hornetq.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1113)
                             at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100)
                             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.7.0_03]
                             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.7.0_03]
                             at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_03]
                            
                             11:36:28,114 WARN  [org.hornetq.ra.inflow.HornetQMessageHandler] (Thread-22 (HornetQ-client-global-threads-142188355)) Unable to call after delivery: javax.resource.spi.LocalTransactionException: javax.transaction.RollbackException: ARJUNA016053: Could not commit transaction.
                             at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.afterDelivery(MessageEndpointInvocationHandler.java:88)
                             at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source) [:1.7.0_03]
                             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_03]
                             at java.lang.reflect.Method.invoke(Method.java:601) [rt.jar:1.7.0_03]
                             at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.handle(AbstractInvocationHandler.java:60)
                             at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.doInvoke(MessageEndpointInvocationHandler.java:136)
                             at org.jboss.as.ejb3.inflow.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:73)
                              at $Proxy12.afterDelivery(Unknown Source)
                              at org.hornetq.ra.inflow.HornetQMessageHandler.onMessage(HornetQMessageHandler.java:287) [hornetq-ra-2.2.13.Final.jar:]
                             at org.hornetq.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:983) [hornetq-core-2.2.13.Final.jar:]
                             at org.hornetq.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:48) [hornetq-core-2.2.13.Final.jar:]
                             at org.hornetq.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1113) [hornetq-core-2.2.13.Final.jar:]
                             at org.hornetq.utils.OrderedExecutorFactory$OrderedExecutor$1.run(OrderedExecutorFactory.java:100) [hornetq-core-2.2.13.Final.jar:]
                                   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) [rt.jar:1.7.0_03]
                             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) [rt.jar:1.7.0_03]
                             at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_03]
                            Caused by: javax.transaction.RollbackException: ARJUNA016053: Could not commit transaction.
                             at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1177) [jbossjts-4.16.2.Final.jar:]
                             at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:117) [jbossjts-4.16.2.Final.jar:]
                             at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
                             at org.jboss.as.ejb3.inflow.MessageEndpointInvocationHandler.afterDelivery(MessageEndpointInvocationHandler.java:72)
                             ... 15 more
                            

                            My MDB header looks like this now:


                            @MessageDriven(name = "TestMDB", activationConfig = {
                                  @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
                                  @ActivationConfigProperty(propertyName = "destination", propertyValue = "abc.pl.testing"),
                                  @ActivationConfigProperty(propertyName = "setupAttempts", propertyValue = "-1"),
                                  @ActivationConfigProperty(propertyName = "hA", propertyValue = "true"),
                                  @ActivationConfigProperty(propertyName = "clientID", propertyValue = "abc.pl.testing"),
                                  @ActivationConfigProperty(propertyName = "subscriptionName", propertyValue = "abc.pl.testing"),
                                  @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "Durable"),
                                  @ActivationConfigProperty(propertyName = "connectorClassName", propertyValue = "org.hornetq.core.remoting.impl.netty.NettyConnectorFactory,org.hornetq.core.remoting.impl.netty.NettyConnectorFactory"),
                                  @ActivationConfigProperty(propertyName = "connectionParameters", propertyValue = "host=1.1.1.1;port=5445,host=2.2.2.2;port=5445") })
                            public class TestMDB implements MessageListener
                            
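
                             The multi-connector activation config above pairs a comma-separated connectorClassName list with a comma-separated connectionParameters list, where semicolons separate the key=value pairs belonging to one connector. As a minimal sketch of that convention (my own illustration of the syntax, not the HornetQ RA's actual parser; `parseConnectionParameters` is a hypothetical helper):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ConnectionParams {

    // Hypothetical helper illustrating the activation-config convention:
    // "," separates connectors, ";" separates params within one connector.
    public static List<Map<String, String>> parseConnectionParameters(String value) {
        List<Map<String, String>> connectors = new ArrayList<Map<String, String>>();
        for (String connector : value.split(",")) {
            Map<String, String> params = new HashMap<String, String>();
            for (String pair : connector.split(";")) {
                String[] kv = pair.split("=", 2);
                params.put(kv[0].trim(), kv[1].trim());
            }
            connectors.add(params);
        }
        return connectors;
    }

    public static void main(String[] args) {
        List<Map<String, String>> c =
            parseConnectionParameters("host=1.1.1.1;port=5445,host=2.2.2.2;port=5445");
        System.out.println(c.size());             // 2 connector entries
        System.out.println(c.get(0).get("host")); // 1.1.1.1
    }
}
```

                             In other words, the example value "host=1.1.1.1;port=5445,host=2.2.2.2;port=5445" is meant to describe two Netty connectors, one per host, which is why connectorClassName also lists NettyConnectorFactory twice.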

                             

                             Moreover, regarding the JBoss code snippet that you attached on the community page above:


                             if (activation.isTopic() && spec.isSubscriptionDurable())
                             {
                                // our MDB is not durable, so we will hit the else clause
                                // ....
                             }
                             else
                             {
                                SimpleString queueName;
                                if (activation.isTopic())
                                {
                                   if (activation.getTopicTemporaryQueue() == null)
                                   {
                                      queueName = new SimpleString(UUID.randomUUID().toString());
                                      session.createQueue(activation.getAddress(), queueName, selectorString, false);
                                      activation.setTopicTemporaryQueue(queueName);
                                   }
                                   else
                                   {
                                      queueName = activation.getTopicTemporaryQueue();
                                   }
                                }
                                else
                                {
                                   queueName = activation.getAddress();
                                }
                                consumer = session.createConsumer(queueName, selectorString);

                             Is this work-around valid only for JMS topics and not for JMS queues?

                             

                            Did you make any other change other than the MDB headers to make it work?

                            • 11. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                              ndipiazza

                               Himanshu - No, I did not make any changes other than the MDB headers.

                               

                              Send the entire log and standalone-full-ha.xml from both the server and the client to nicholas.dipiazza at gmail and I will take a look.

                               

                              Also make sure your JBoss client is using standalone-full.xml and not standalone-full-ha.xml.
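
                               For reference, the server profile is selected at startup with the -c flag; a sketch, assuming the stock JBoss AS 7 layout with JBOSS_HOME pointing at the install directory:

```shell
# Client-side JBoss instance: non-HA full profile
$JBOSS_HOME/bin/standalone.sh -c standalone-full.xml

# Server-side HornetQ cluster nodes: HA full profile
$JBOSS_HOME/bin/standalone.sh -c standalone-full-ha.xml
```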

                               

                               And JMS queues should NOT be affected by this issue. This affects topics only.

                              • 12. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                                jbertram

                                Coming back to this issue after a long break...

                                 

                                Do you have a way I can reproduce this now on my own machine?

                                • 13. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                                  ndipiazza

                                  Justin, please see the response from me that is labelled "Correct answer" for why I wasn't able to listen to the Topic from my MDB with a hard-coded list of providers.

                                   

                                  This particular part has been resolved. The remaining issue is in another topic here: https://community.jboss.org/thread/205483

                                  • 14. Re: HQ 2.2.13 - Cannot seem to set up MDB with failover using hard coded list of hosts
                                    jbertram

                                     I read that earlier, and my understanding was that you had simply found a work-around, not a real solution.  I based this on the fact that you said, "I found a work-around we can use."  Therefore I concluded that, while you were able to continue your development, you didn't actually have a solution for the root issue.  Is that not accurate?

                                     

                                    Also, you mentioned that you were able to reproduce the issue on a single box, and I was hoping you could explain enough to help me do the same.
