8 Replies Latest reply on Feb 25, 2015 11:21 AM by jbertram

    wildfly 8.1.0 hornet version

    joelcstewart

      In WildFly 8.1.0, the module at wildfly-8.1.0.Final\modules\system\layers\base\org\hornetq\main tells me it is HornetQ version 2.4.1.Final.

       

      But I cannot find the 2.4.1 documentation distribution anywhere.

       

      In any case, I am unable to use the configuration that works in 2.3.0.Final, as used in EAP 6.2.  The problem seems to be with the Netty configuration:

       

      <connectors>
          <connector name="netty">
              <factory-class>
                  org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
              </factory-class>
              <param key="port" value="5446"/>
          </connector>
      </connectors>

       

      WildFly does not seem to understand this anymore.  I am not sure where I am supposed to find documentation about how to configure HornetQ.

      Messaging configuration - WildFly 8 - Project Documentation Editor

       

      says to go to

       

      HornetQ - Documentation - JBoss Community

       

      which does not even have the 2.4.1 docs posted.

       

      Where exactly is the WildFly 8.1.0.Final documentation for setting up messaging with Netty?

        • 1. Re: wildfly 8.1.0 hornet version
          jbertram

          What specifically are you trying to accomplish?  Do you simply want to configure a Netty connector to use port 5446?

          • 2. Re: Re: wildfly 8.1.0 hornet version
            joelstewart

            Ultimately, I want the equivalent of this EAP 6.2 (HornetQ 2.3.0) configuration:

             

            <connectors>
                <connector name="netty">
                    <factory-class>
                        org.hornetq.core.remoting.impl.netty.NettyConnectorFactory
                    </factory-class>
                    <param key="port" value="5446"/>
                </connector>
            </connectors>

             

            in WildFly 8.

            • 3. Re: Re: Re: wildfly 8.1.0 hornet version
              jbertram

              Add this to <connectors> in the messaging subsystem:

               

              <netty-connector name="netty" socket-binding="messaging"/>
              

               

              Then add this to the socket-binding-group:

               

              <socket-binding name="messaging" port="5446"/>
              
              • 4. Re: Re: Re: wildfly 8.1.0 hornet version
                jbertram

                To be clear, this is different between standalone HornetQ (as well as earlier versions of JBoss AS) and WildFly because the WildFly configuration conventions disallow raw class names (e.g. org.hornetq.core.remoting.impl.netty.NettyConnectorFactory), since they aren't terribly user friendly.  Also, since WildFly is an integrated environment where lots of different subsystems need to be managed in a coherent way, any address/port configuration is done through the socket-binding-group.
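
                To illustrate how the two pieces fit together, here is a minimal sketch against the stock standalone-full.xml (the socket-binding-group attributes shown come from the default configuration and may differ in yours):

                <!-- messaging subsystem: the connector references the binding by name -->
                <connectors>
                    <netty-connector name="netty" socket-binding="messaging"/>
                </connectors>

                <!-- socket-binding-group: the actual address/port lives here -->
                <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
                    <socket-binding name="messaging" port="5446"/>
                </socket-binding-group>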

                • 5. Re: Re: Re: Re: wildfly 8.1.0 hornet version
                  joelcstewart

                  For reference for others stumbling in, this is what is working for me.  I should probably remove the generic in-vm connector and replace it with the in-vm-connector shorthand per the JBoss docs (see the sketch after this config):

                     <connectors>
                         <netty-connector name="netty" socket-binding="messaging"/>
                         <connector name="in-vm">
                             <factory-class>org.hornetq.core.remoting.impl.invm.InVMConnectorFactory</factory-class>
                         </connector>
                     </connectors>

                     <acceptors>
                         <netty-acceptor name="netty" socket-binding="messaging"/>
                         <acceptor name="in-vm">
                             <factory-class>org.hornetq.core.remoting.impl.invm.InVMAcceptorFactory</factory-class>
                             <param key="server-id" value="0"/>
                         </acceptor>
                     </acceptors>
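
                  Something like the following should be the shorthand replacement, I believe (untested here; verify the exact element names against your messaging subsystem schema):

                     <!-- shorthand equivalents of the factory-class style connector/acceptor above -->
                     <in-vm-connector name="in-vm" server-id="0"/>
                     <in-vm-acceptor name="in-vm" server-id="0"/>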
                  

                   


                  Turns out my problem may have been more about the DefaultJMSConnectionFactory now being required to deploy the app.  Is that right?  My app only refers to java:/JmsXA, so I am not sure why it failed to deploy without this JBoss JNDI entry...

                   

                     <pooled-connection-factory name="hornetq-ra">
                         <transaction mode="xa"/>
                         <connectors>
                             <connector-ref connector-name="in-vm"/>
                         </connectors>
                         <entries>
                             <entry name="java:/JmsXA"/>
                             <!-- Global JNDI entry used to provide a default JMS Connection factory to EE application -->
                             <entry name="java:jboss/DefaultJMSConnectionFactory"/>
                         </entries>
                     </pooled-connection-factory>
                  

                   

                  Any reason why this factory JNDI name must be there?  If the entry is required, how would one set up multiple XA pools?

                   

                  Thanks,

                  • 6. Re: Re: Re: Re: Re: wildfly 8.1.0 hornet version
                    jbertram

                    Java EE 7 requires that application servers supply a default JMS connection factory.  The JNDI name of the default JMS connection factory is controlled by this setting in the "ee" subsystem:

                     

                    <default-bindings context-service="java:jboss/ee/concurrency/context/default"
                                      datasource="java:jboss/datasources/ExampleDS"
                                      jms-connection-factory="java:jboss/DefaultJMSConnectionFactory"
                                      managed-executor-service="java:jboss/ee/concurrency/executor/default"
                                      managed-scheduled-executor-service="java:jboss/ee/concurrency/scheduler/default"
                                      managed-thread-factory="java:jboss/ee/concurrency/factory/default"/>
                    

                     

                    If you remove <entry name="java:jboss/DefaultJMSConnectionFactory"/> from the default pooled-connection-factory, then this EE 7 requirement will be unmet, which is probably why you're seeing an error when you remove it.  You are free to define as many pooled-connection-factory elements as you like, but one of them needs to be the default; see the sketch below.
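
                    For example, a minimal sketch of two XA pools where only the first carries the default binding (the second factory's name, connector, and JNDI entry are hypothetical, just for illustration):

                    <pooled-connection-factory name="hornetq-ra">
                        <transaction mode="xa"/>
                        <connectors>
                            <connector-ref connector-name="in-vm"/>
                        </connectors>
                        <entries>
                            <entry name="java:/JmsXA"/>
                            <entry name="java:jboss/DefaultJMSConnectionFactory"/>
                        </entries>
                    </pooled-connection-factory>
                    <!-- a second, non-default XA pool; name and entry are made up -->
                    <pooled-connection-factory name="netty-ra">
                        <transaction mode="xa"/>
                        <connectors>
                            <connector-ref connector-name="netty"/>
                        </connectors>
                        <entries>
                            <entry name="java:/jms/NettyXA"/>
                        </entries>
                    </pooled-connection-factory>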

                     

                    In general, I'm curious why you need a custom netty-connector rather than simply using the default http-connector.

                    • 7. Re: Re: Re: Re: Re: Re: wildfly 8.1.0 hornet version
                      joelcstewart

                      I have to admit I have not read up on JMS 2.0 / EE 7.  My code base is still using the 1.1 style:

                      1. inject a ConnectionFactory
                      2. inject a Topic / Queue
                      3. write all the code to create a session, create a producer, etc.

                      All of that was unsatisfying in style, especially the misleading connection.createSession(false, ...), where false was not really false, because JCA enlists the session into the current JTA context anyway.

                       

                      I just looked at 2.0, and it looks like a better programming API, with messaging contexts:

                      @Inject
                      @JMSConnectionFactory
                      private JMSContext context;

                      @Resource(lookup = "jms/dataQueue")
                      private Queue dataQueue;

                      public void sendMessageJavaEE7(String body) {
                          context.send(dataQueue, body);
                      }

                       

                      YEAH!  So much prettier.  Is it true that I don't need any JNDI name on the @JMSConnectionFactory?

                       

                      You have got to understand where I am.  I have a well-tested EAP 6.2 app that I was asked to make run on WildFly, so I am not terribly interested in re-engineering it to use the new API (as ugly as JMS 1.1 is...).

                       

                      It seems to me that out of the box, once the messaging subsystem is added, WildFly would create its own default JMS setup, probably just an in-VM connector, or whatever the spec requires of it.  If I choose to override that and say, nope, you need to use this XA persistent one as the default, then I can see this JNDI entry as an option for changing it to something else.  When I think about DefaultDS, I see how the thinking goes and I understand it: I am fully capable of looking at the standalone config that uses DefaultDS and telling it to use a different datasource if I choose to.  It is just that my starting point is a messaging:1.4 module, not a 2.0, so it should be entirely understandable why I ended up where I am.

                       

                      Great question: why use Netty and not HTTP?  Well, the biggest reason is that the existing clients use Netty TCP.  There is a lot that goes on and a huge history to it, but understand that my requirement was a client system that uses clustered JMS servers but not clustered JBoss servers, and I also had a requirement for failover.  What I do know is that clients find out right away, via the JMS ExceptionListener, when a TCP connection fails.  Can HTTP give me that?  I do not know, and frankly, right now, I don't care to know.  I have to limit my variables in the migration effort, so I want it to be Netty TCP.  Good?

                      • 8. Re: Re: Re: Re: Re: Re: wildfly 8.1.0 hornet version
                        jbertram

                        You have got to understand where I am.  I have a well-tested EAP 6.2 app that I was asked to make run on WildFly, so I am not terribly interested in re-engineering it to use the new API (as ugly as JMS 1.1 is...).

                        Just to be clear, I'm not asking you to re-engineer anything to use any new API.  All your JMS 1.1 code should work without any issue.

                         

                        It seems to me that out of the box, once the messaging subsystem is added, WildFly would create its own default JMS setup, probably just an in-VM connector, or whatever the spec requires of it.  If I choose to override that and say, nope, you need to use this XA persistent one as the default, then I can see this JNDI entry as an option for changing it to something else.  When I think about DefaultDS, I see how the thinking goes and I understand it: I am fully capable of looking at the standalone config that uses DefaultDS and telling it to use a different datasource if I choose to.  It is just that my starting point is a messaging:1.4 module, not a 2.0, so it should be entirely understandable why I ended up where I am.

                        I can't tell if you're describing a problem here or not.  Is something broken or incomprehensible about the WildFly 8.1 JMS configuration?  If so, please clarify.

                         

                        Great question: why use Netty and not HTTP?  Well, the biggest reason is that the existing clients use Netty TCP.  There is a lot that goes on and a huge history to it, but understand that my requirement was a client system that uses clustered JMS servers but not clustered JBoss servers, and I also had a requirement for failover.  What I do know is that clients find out right away, via the JMS ExceptionListener, when a TCP connection fails.  Can HTTP give me that?  I do not know, and frankly, right now, I don't care to know.  I have to limit my variables in the migration effort, so I want it to be Netty TCP.  Good?

                        I think you've misunderstood what functionality the http-connector provides.  In general, the Undertow web server essentially serves as a single point for all connections (which makes sense because it was designed to deal with a large number of concurrent connections in a non-blocking, high-performance way).  This mechanism allows clients of all sorts to connect to the server via a single port (e.g. 8080).  Once the initial connection is made, the HTTP upgrade header is leveraged to turn the HTTP connection into a different kind of connection appropriate for whatever component is ultimately handling it.  In your case, since HornetQ is handling the JMS connection, the initial HTTP connection is turned into a Netty TCP connection, just as if the client had connected via Netty TCP in the first place.  You can rest assured that your ExceptionListener(s) will be triggered appropriately if the TCP connection fails, even if you use the http-connector.
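
                        For reference, this is roughly the connector/acceptor pair the stock standalone-full.xml wires up for HTTP upgrade (recalled from the WildFly 8.1 distribution; verify against your copy):

                        <!-- messaging subsystem: HTTP connector that rides the web server's port -->
                        <http-connector socket-binding="http" name="http-connector">
                            <!-- names the acceptor the upgraded connection is handed off to -->
                            <param key="http-upgrade-endpoint" value="http-acceptor"/>
                        </http-connector>
                        ...
                        <http-acceptor http-listener="default" name="http-acceptor"/>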