
    Strange netty error when sending a lot of messages

    rnicholson10

      Ok, I'm back on the Beta5 release. We are seeing an unusual error with Netty. At the time this error occurs we have tens of thousands of TCP connections from localhost to localhost, all in the TIME_WAIT state (visible using "netstat -an"):

      tcp 0 0 127.0.0.1:33727 127.0.0.1:5445 TIME_WAIT
      tcp 0 0 127.0.0.1:33721 127.0.0.1:5445 TIME_WAIT
      tcp 0 0 127.0.0.1:33725 127.0.0.1:5445 TIME_WAIT
      tcp 0 0 127.0.0.1:33724 127.0.0.1:5445 TIME_WAIT
      tcp 0 0 127.0.0.1:33722 127.0.0.1:5445 TIME_WAIT
      tcp 0 0 127.0.0.1:33728 127.0.0.1:5445 TIME_WAIT
      tcp 0 0 127.0.0.1:33726 127.0.0.1:5445 TIME_WAIT
      tcp 0 0 127.0.0.1:33723 127.0.0.1:5445 TIME_WAIT
      


      We are sending 65K messages one after the other. Each message depends on the previous one, so they must be sent in sequence. Messages are sent via a core bridge using a message group.

      What we think is happening is that netty cannot open another localhost connection as there are too many in the TIME_WAIT state.

      The first error we get is a JMS exception in the MDB that should be consuming the messages.

      2009-10-09 16:22:26,253 ERROR [STDERR] (pool-19-thread-1) Problem with creating connection or session.
      2009-10-09 16:22:26,253 DEBUG [org.jboss.ejb3.interceptors.aop.InterceptorSequencer] (Thread-340 (group:HornetQ-client-global-threads-21795015)) aroundInvoke [advisedMethod=public void com.paddypower.phase.engine.bean.mdb.EngineMDB.onMessage(javax.jms.Message), unadvisedMethod=public void com.paddypower.phase.engine.bean.mdb.EngineMDB.onMessage(javax.jms.Message), metadata=null, targetObject=com.paddypower.phase.engine.bean.mdb.EngineMDB@1c572d0, arguments=[Ljava.lang.Object;@534e5d]
      2009-10-09 16:22:26,253 ERROR [STDERR] (pool-19-thread-1) Messages processing will be not started. Fix queue problem and redeploy application.
      


      Shortly after, we get a lot of errors like the following:

      2009-10-09 16:22:26,371 ERROR [STDERR] (pool-19-thread-15) Problem with creating connection or session.
      2009-10-09 16:22:26,384 ERROR [STDERR] (pool-19-thread-15) Messages processing will be not started. Fix queue problem and redeploy application.
      2009-10-09 16:22:26,384 INFO [STDOUT] (pool-19-thread-15) SentToHandler
      2009-10-09 16:22:26,385 SEVERE [org.hornetq.integration.transports.netty.NettyConnector] (pool-19-thread-9) Failed to create netty connection
      java.net.BindException: Cannot assign requested address
       at sun.nio.ch.Net.connect(Native Method)
       at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
       at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:145)
       at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:109)
       at org.jboss.netty.channel.Channels.connect(Channels.java:762)
       at org.jboss.netty.channel.AbstractChannel.connect(AbstractChannel.java:195)
       at org.jboss.netty.bootstrap.ClientBootstrap$Connector.channelOpen(ClientBootstrap.java:287)
       at org.jboss.netty.channel.Channels.fireChannelOpen(Channels.java:197)
       at org.jboss.netty.channel.socket.nio.NioClientSocketChannel.<init>(NioClientSocketChannel.java:88)
       at org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory.newChannel(NioClientSocketChannelFactory.java:146)
       at org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory.newChannel(NioClientSocketChannelFactory.java:93)
       at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:235)
       at org.jboss.netty.bootstrap.ClientBootstrap.connect(ClientBootstrap.java:199)
       at org.hornetq.integration.transports.netty.NettyConnector.createConnection(NettyConnector.java:363)
       at org.hornetq.core.client.impl.ConnectionManagerImpl.getConnection(ConnectionManagerImpl.java:903)
       at org.hornetq.core.client.impl.ConnectionManagerImpl.getConnectionWithRetry(ConnectionManagerImpl.java:783)
       at org.hornetq.core.client.impl.ConnectionManagerImpl.createSession(ConnectionManagerImpl.java:280)
       at org.hornetq.core.client.impl.ClientSessionFactoryImpl.createSessionInternal(ClientSessionFactoryImpl.java:976)
       at org.hornetq.core.client.impl.ClientSessionFactoryImpl.createSession(ClientSessionFactoryImpl.java:721)
       at org.hornetq.jms.client.HornetQConnection.authorize(HornetQConnection.java:710)
       at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:729)
       at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:307)
       at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:302)
       at com.paddypower.phase.engine.core.SingleDestMessageSenderTask.run(SingleDestMessageSenderTask.java:54)
       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
       at java.lang.Thread.run(Unknown Source)
      
      


      Once more, any ideas?

        • 1. Re: Strange netty error when sending a lot of messages
          timfox

          Can you post a program that replicates the issue, and someone will investigate.

          The obvious question is: why are you creating so many connections?

          • 2. Re: Strange netty error when sending a lot of messages
            rnicholson10

            Forgot one piece of info. The error first occurs after sending 61755 of the messages.

            • 3. Re: Strange netty error when sending a lot of messages
              rnicholson10

              Can you explain how this works?

              Why are connections being opened from localhost to localhost?

              Is this the bridge taking messages and placing them on the local queue?

              I create a new connection each time I send a message. Should I think of using a pool of connections instead? I guess I would need to override my thread pool executor to create / destroy connections correctly on startup/shutdown. Is it ok to keep connections idle for long periods of time?

              • 4. Re: Strange netty error when sending a lot of messages
                clebert.suconic

                Are you using MDBs? And are you perhaps using the Netty connector for the MDBs?


                We found a connection leak in JCA that was fixed in Beta5... maybe that's your issue?

                https://jira.jboss.org/jira/browse/HORNETQ-139

                • 5. Re: Strange netty error when sending a lot of messages
                  rnicholson10

                  How do I tell if the Netty connector is being used for MDBs?

                  I would have thought that an in-VM connector would be used for an MDB. Or is Netty used for everything?

                  I guess my main question is: Why is HornetQ connecting to localhost on loopback? Surely any communication here can be done in the VM.

                  • 6. Re: Strange netty error when sending a lot of messages
                    clebert.suconic

                    Ok, so probably there is a configuration error there, and your system is going through the loopback.

                    And the fix for the leak bug that was affecting you has also been added.


                    Maybe you could try the trunk?


                    I'm refreshing my memory in regard to the config on AS. (I've just deleted my AS5 trunk. I will update my SVN or maybe download it. I will get back to you right after I recreate my environment).

                    • 7. Re: Strange netty error when sending a lot of messages
                      rnicholson10

                      I never create a connection or session from within an MDB so I'm not sure that bug is affecting me. I'm going to implement a connection pool to make connection creation more efficient, although I don't know if this will help with the local loopback.

                      It would be great if you could help me with the configuration error, if I could avoid the local loopback I think my problem would disappear (but that just might be wishful thinking).

                      Thanks,

                      Ross

                      • 8. Re: Strange netty error when sending a lot of messages
                        clebert.suconic

                        The MDB is connected through InVM by default.


                        I looked at the configs generated by the default installation.


                        There is this chapter in the documentation as well:

                        http://hornetq.sourceforge.net/docs/hornetq-2.0.0.BETA5/user-manual/en/html/appserver-integration.html#d0e6806


                        I believe that bug would also happen if you are not closing connections, so you are probably not closing connections somewhere.


                        Or if you could reproduce this as Tim said, we could take a look.



                        • 9. Re: Strange netty error when sending a lot of messages
                          rnicholson10

                          I always close connections. I only use one class to send messages. Here is the code I use. If you see anything that is incorrect please let me know.

                          It still begs the question: why are there thousands of loopback TCP connections if the InVM connector is being used? Or are all of these being created when I send a message (I don't know if this uses JCA)?

                           InitialContext initialContext = null;
                           ConnectionFactory connectionFactory = null;
                           Connection connection = null;
                           MessageProducer producer = null;
                           Queue queue = null;

                           try
                           {
                              initialContext = new InitialContext();
                              connectionFactory = (ConnectionFactory) initialContext.lookup(CONNECTION_FACTORY);
                              queue = (Queue) initialContext.lookup(QUEUE);

                              connection = connectionFactory.createConnection();
                              Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                              producer = session.createProducer(queue);

                              ObjectMessage message = session.createObjectMessage();
                              message.setObject(packet);
                              // Group ID keeps messages from the same source in sequence
                              message.setStringProperty(HornetQMessage.JMSXGROUPID, source.getName());

                              producer.send(message, DeliveryMode.PERSISTENT, priority.getPriority(), 0);
                              connection.close();
                           }
                           catch (NamingException ne)
                           {
                              log.error("Could not send message from input sender at source: " + source.getId(), ne);
                           }
                           catch (JMSException jmse)
                           {
                              log.error("Could not send message from input sender at source: " + source.getId(), jmse);
                              jmsException = jmse;
                           }
                           finally
                           {
                              try
                              {
                                 // Closing the connection also closes its session and producer
                                 if (connection != null)
                                 {
                                    connection.close();
                                 }

                                 if (initialContext != null)
                                 {
                                    initialContext.close();
                                 }
                              }
                              catch (JMSException e)
                              {
                                 e.printStackTrace();
                              }
                              catch (NamingException e)
                              {
                                 e.printStackTrace();
                              }
                           }
                          


                          • 10. Re: Strange netty error when sending a lot of messages
                            clebert.suconic

                            I understood your client is a remote client, so it should use a remote connection.

                            But I honestly don't know why there are so many connections there. I would need some code replicating your issue.


                            The only known issue we have so far is the JCA connection leakage. So, maybe you should try trunk. (Notice that you will need to provide the ResourceAdapter name through an annotation on the EJB or deployment descriptor, as that has been changed).
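
                            For reference, the annotation looks something like this on the MDB (the RAR name and the destination here are just examples and depend on how the adapter is deployed in your installation):

                            import org.jboss.ejb3.annotation.ResourceAdapter;

                            import javax.ejb.ActivationConfigProperty;
                            import javax.ejb.MessageDriven;
                            import javax.jms.Message;
                            import javax.jms.MessageListener;

                            // Illustrative only: point the MDB at the HornetQ resource adapter explicitly
                            @MessageDriven(activationConfig =
                            {
                               @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
                               @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/ExampleQueue")
                            })
                            @ResourceAdapter("hornetq-ra.rar") // name of the deployed RA; adjust to your installation
                            public class ExampleMDB implements MessageListener
                            {
                               public void onMessage(Message message)
                               {
                                  // process the message
                               }
                            }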

                            • 11. Re: Strange netty error when sending a lot of messages
                              rnicholson10

                              I tried trunk the other day but had to submit a bug as bridges were not reconnecting on startup.

                              https://jira.jboss.org/jira/browse/HORNETQ-178

                              I will try a connection pool first in case it is a suitable workaround.

                              • 12. Re: Strange netty error when sending a lot of messages
                                rnicholson10

                                I have created a simple connection pool and it seems to reduce the number of loopback connections drastically.

                                Another question if you wouldn't mind.

                                The pool I have created will grow as required over time, only increasing when a connection is not available. On server shutdown all connections are closed (I have not added a reaping function to it yet, but may do so soon).

                                All I'm pooling at the moment is connections, but I could also pool a session and producer along with each connection, saving those from being created on every send as well. Would this be wise, or should I leave it as just connections?

                                I remember reading in the docs that reusing producers is a good idea as they are expensive to create.
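
                                In case it helps, the rough shape of what I built is something like the following (a simplified sketch only, not the exact class I'm running; the SimpleConnectionPool name and the grow-on-demand policy are just illustrative):

                                import java.util.concurrent.LinkedBlockingQueue;

                                import javax.jms.Connection;
                                import javax.jms.ConnectionFactory;
                                import javax.jms.JMSException;

                                // Illustrative only: a minimal grow-on-demand connection pool
                                public class SimpleConnectionPool
                                {
                                   private final ConnectionFactory connectionFactory;
                                   private final LinkedBlockingQueue<Connection> idle = new LinkedBlockingQueue<Connection>();

                                   public SimpleConnectionPool(ConnectionFactory connectionFactory)
                                   {
                                      this.connectionFactory = connectionFactory;
                                   }

                                   // Hand out an idle connection, or create a new one if none is available
                                   public Connection borrow() throws JMSException
                                   {
                                      Connection connection = idle.poll();
                                      return connection != null ? connection : connectionFactory.createConnection();
                                   }

                                   // Return a connection to the pool instead of closing it
                                   public void release(Connection connection)
                                   {
                                      idle.offer(connection);
                                   }

                                   // Called on server shutdown: close everything currently idle
                                   public void shutdown()
                                   {
                                      Connection connection;
                                      while ((connection = idle.poll()) != null)
                                      {
                                         try
                                         {
                                            connection.close();
                                         }
                                         catch (JMSException ignored)
                                         {
                                         }
                                      }
                                   }
                                }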

                                • 13. Re: Strange netty error when sending a lot of messages
                                  timfox

                                  Yes, this sounds like the MDB leak that Andy fixed a couple of weeks back.

                                  There should be no need to do any pooling yourself; this will be done by the JCA layer.
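
                                  That is, when sending from inside the application server, look up the JCA-managed connection factory rather than the plain one and it will pool for you. Something like this (java:/JmsXA is the usual binding in AS 5, but check what your configuration actually binds; queue and packet here are just whatever you already look up and send in your own code):

                                  InitialContext ic = new InitialContext();

                                  // JCA-managed factory: connections obtained from it are pooled by the app server
                                  ConnectionFactory cf = (ConnectionFactory) ic.lookup("java:/JmsXA");

                                  Connection connection = cf.createConnection();
                                  try
                                  {
                                     Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                                     MessageProducer producer = session.createProducer(queue);
                                     producer.send(session.createObjectMessage(packet));
                                  }
                                  finally
                                  {
                                     // close() hands the connection back to the JCA pool rather than tearing down a socket
                                     connection.close();
                                  }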

                                  • 14. Re: Strange netty error when sending a lot of messages
                                    rnicholson10

                                    Tim, I'm not using MDBs while sending. The connection pool I created is for when I want to send a message.

                                    Are you saying that the fix to the MDB leak would also help when sending messages (outside an MDB)? I am using JMS for this but I didn't think the MDB leak would apply here.

                                    Would it be possible to apply the leak fix to the Beta 5 release code? If I could check out this release and patch it, then I should be able to see if this fix sorts out the issue I'm having. Unfortunately, when I checked out trunk I had an issue whereby bridges would not reconnect after restart, so I can't really use that. Unless of course someone can recommend a good time to check out from trunk when they think it will all work!

                                    ;)

                                    R.
