2 Replies Latest reply on Aug 29, 2016 12:14 PM by wwang2016

    Error During Load Test Against A modcluster Configuration

    wwang2016

      Hi

       

      I set up WildFly 10 with Apache httpd (mod_cluster v1.3.1). Currently, standalone-full-ha.xml has only the default configuration, although I had to make some changes to the default httpd.conf to get the two working together.
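
      For context, the httpd.conf changes were along the lines of a standard mod_cluster 1.3.x setup for httpd 2.4. The fragment below is only a sketch, not my exact config: the module paths, the 10.25.25.104 address, the 6666 MCPM port, and the Require rule are illustrative.

```apache
# Illustrative mod_cluster 1.3.x configuration for httpd 2.4 (not verbatim).
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule cluster_slotmem_module modules/mod_cluster_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so

# MCPM management virtual host that the WildFly modcluster subsystem talks to.
Listen 10.25.25.104:6666
<VirtualHost 10.25.25.104:6666>
    EnableMCPMReceive
    # Balancer name matches the "balancer://mycluster" seen in the error log.
    ManagerBalancerName mycluster
    <Location />
        Require ip 10.25.25.
    </Location>
</VirtualHost>
```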

       

      I was able to run many rounds of load tests through the Apache web server, which dispatches requests to the WildFly 10 instance. Each test sent about 20,000 requests, and I ran many of them in a day. I also tried sending 100K requests to the Apache web server. Neither the web server nor the WildFly instance showed any errors, although only a small portion (20%) of the requests completed successfully (OK status) in JMeter. When I ran the test again the next day, I hit the issues below. I am not sure whether I need to tune the configuration on the Apache side or the WildFly side.

       

      Note: One thing I am confident about is that the WildFly instance itself performs well, since I can load test directly against it without errors. I also found that if I restart the Apache web server, the problem disappears, but I would then expect it to come back the next day. It is definitely a problem that needs to be resolved.

       

      By monitoring open files with the lsof command, I can see that the number of open files grows to a point (>3379) at which the WildFly instance reports server errors when I send requests through the web server. However, if I send requests directly to the WildFly instance, the number of open files stays at around 224, and the issue never occurs.
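
      For anyone wanting to reproduce the monitoring, it was roughly the following (a sketch; in practice I located the WildFly JVM PID with something like pgrep, shown here only in a comment):

```shell
# Count open file descriptors for a given PID by listing /proc/<pid>/fd
# (Linux-specific, works on CentOS 7). Here $$ (the current shell) stands
# in for the WildFly JVM PID; to find the real one, use e.g. `pgrep -f jboss`.
count_fds() {
  ls "/proc/$1/fd" 2>/dev/null | wc -l
}

count_fds "$$"
```

      Watching this value in a loop (e.g. with `watch`) shows whether descriptors are climbing toward the per-process limit during the load test.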

       

       

      (1) With a load of 20,000 requests on the second day, the web server reported errors, although the WildFly instance did not show errors in the console.

      I was able to run a small load test (500 requests) later without problems.

       

      [Fri Aug 26 09:11:51.546079 2016] [proxy_ajp:error] [pid 7495:tid 140560832644864] [client 10.25.25.104:53018] AH01014: ajp_handle_cping_cpong: awaited CPONG, received 5

      [Fri Aug 26 09:11:51.546105 2016] [proxy_ajp:error] [pid 7495:tid 140560832644864] (20014)Internal error: [client 10.25.25.104:53018] AH00897: cping/cpong failed to 10.25.25.112:8009 (10.25.25.112)

      [Fri Aug 26 09:11:52.726262 2016] [proxy_ajp:error] [pid 7495:tid 140560849430272] [client 10.25.25.104:53342] AH00893: dialog to 10.25.25.112:8009 (10.25.25.112) failed

      [Fri Aug 26 09:11:52.726351 2016] [proxy_ajp:error] [pid 7495:tid 140560973809408] (9)Bad file descriptor: AH01030: ajp_ilink_receive() can't receive header

      [Fri Aug 26 09:11:52.726370 2016] [proxy_ajp:error] [pid 7495:tid 140560973809408] [client 10.25.25.104:53343] AH01012: ajp_handle_cping_cpong: ajp_ilink_receive failed

      [Fri Aug 26 09:11:52.726376 2016] [proxy_ajp:error] [pid 7495:tid 140560973809408] (120006)APR does not understand this error code: [client 10.25.25.104:53343] AH00897: cping/cpong failed to 10.25.25.112:8009 (10.25.25.112)

      [Fri Aug 26 09:11:52.727091 2016] [:error] [pid 7249:tid 140560757110528] proxy: CLUSTER: (balancer://mycluster). All workers are in error state

      [Fri Aug 26 09:11:52.730723 2016] [:error] [pid 7281:tid 140560706754304] proxy: CLUSTER: (balancer://mycluster). All workers are in error state

      ...

       

       

      (2) After I ran the test a couple more times (20,000 requests each), the WildFly instance started to show errors, and the application actually went down.

       

      2016-08-26 09:35:45,606 ERROR [org.jboss.as.server.deployment.scanner] (DeploymentScanner-threads - 1) WFLYDS0012: Scan of /home/wayne/wildfly-10.0.0.Final/standalone/deployments threw Exception: java.lang.RuntimeException: WFLYDS0032: Failed to list files in directory /home/wayne/wildfly-10.0.0.Final/standalone/deployments. Check that the contents of the directory are readable.

        at org.jboss.as.server.deployment.scanner.FileSystemDeploymentService.listDirectoryChildren(FileSystemDeploymentService.java:1287)

        at org.jboss.as.server.deployment.scanner.FileSystemDeploymentService.scanDirectory(FileSystemDeploymentService.java:781)

        at org.jboss.as.server.deployment.scanner.FileSystemDeploymentService.scan(FileSystemDeploymentService.java:570)

        at org.jboss.as.server.deployment.scanner.FileSystemDeploymentService.scan(FileSystemDeploymentService.java:489)

        at org.jboss.as.server.deployment.scanner.FileSystemDeploymentService$DeploymentScanRunnable.run(FileSystemDeploymentService.java:250)

        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)

        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)

        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)

        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        at java.lang.Thread.run(Thread.java:745)

        at org.jboss.threads.JBossThread.run(JBossThread.java:320)

       

       

      2016-08-26 09:35:45,804 WARN  [org.apache.activemq.artemis.core.server] (Thread-28 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@4503131b-761871180)) AMQ222149: Message Reference[340971]:RELIABLE:ServerMessage[messageID=340971,durable=true,userID=f4ef6c12-6b91-11e6-883d-4fef73126b67,priority=4, bodySize=1915, timestamp=Fri Aug 26 09:35:30 EDT 2016,expiration=0, durable=true, address=jms.queue.jumptxt-media.InboundSMS,properties=TypedProperties[IMPACT_X_MESSAGE_DELIVERY_COUNT=1,IMPACT_X_MESSAGE_EXPIRY=1472304930514,__AMQ_CID=f4eb9b7d-6b91-11e6-883d-4fef73126b67,ImpactMessageDelay=10000,_AMQ_SCHED_DELIVERY=1472218540514,JMSXDeliveryCount=0]]@1233483289 has reached maximum delivery attempts, sending it to Dead Letter Address jms.queue.jumptxt-media.InboundMobileMessageDLQ from jms.queue.jumptxt-media.InboundSMS

      2016-08-26 09:35:45,805 WARN  [org.apache.activemq.artemis.core.server] (Thread-28 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@4503131b-761871180)) AMQ222149: Message Reference[342236]:RELIABLE:ServerMessage[messageID=342236,durable=true,userID=f62af952-6b91-11e6-883d-4fef73126b67,priority=4, bodySize=1915, timestamp=Fri Aug 26 09:35:32 EDT 2016,expiration=0, durable=true, address=jms.queue.jumptxt-media.InboundSMS,properties=TypedProperties[IMPACT_X_MESSAGE_DELIVERY_COUNT=2,IMPACT_X_MESSAGE_EXPIRY=1472304921349,__AMQ_CID=f62aab2f-6b91-11e6-883d-4fef73126b67,ImpactMessageDelay=10800,_AMQ_SCHED_DELIVERY=1472218543382,JMSXDeliveryCount=0]]@292950365 has reached maximum delivery attempts, sending it to Dead Letter Address jms.queue.jumptxt-media.InboundMobileMessageDLQ from jms.queue.jumptxt-media.InboundSMS

      2016-08-26 09:35:45,993 WARN  [com.arjuna.ats.arjuna] (Thread-9 (ActiveMQ-client-global-threads-2041603878)) ARJUNA012073: BasicAction.End() - prepare phase of action-id 0:ffff7f000001:-1d3e0b99:57c03f67:f282 failed.

      2016-08-26 09:35:45,993 WARN  [com.arjuna.ats.arjuna] (Thread-9 (ActiveMQ-client-global-threads-2041603878)) ARJUNA012075: Action Aborting

      2016-08-26 09:35:46,036 WARN  [org.apache.activemq.artemis.core.client] (Thread-9 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@4503131b-761871180)) AMQ212007: connector.create or connectorFactory.createConnector should never throw an exception, implementation is badly behaved, but we will deal with it anyway.: io.netty.channel.ChannelException: Unable to create Channel from class class io.netty.channel.socket.nio.NioSocketChannel

        at io.netty.bootstrap.AbstractBootstrap$BootstrapChannelFactory.newChannel(AbstractBootstrap.java:455)

        at io.netty.bootstrap.AbstractBootstrap.initAndRegister(AbstractBootstrap.java:306)

        at io.netty.bootstrap.Bootstrap.doConnect(Bootstrap.java:134)

        at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:116)

        at org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnector.createConnection(NettyConnector.java:601)

        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.openTransportConnection(ClientSessionFactoryImpl.java:990)

        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createTransportConnection(ClientSessionFactoryImpl.java:1034)

        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.establishNewConnection(ClientSessionFactoryImpl.java:1213)

        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnection(ClientSessionFactoryImpl.java:851)

        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.getConnectionWithRetry(ClientSessionFactoryImpl.java:753)

        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.connect(ClientSessionFactoryImpl.java:239)

        at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:759)

        at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:616)

        at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.connect(ServerLocatorImpl.java:597)

        at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl$5.run(ServerLocatorImpl.java:1479)

        at org.apache.activemq.artemis.utils.OrderedExecutorFactory$OrderedExecutor$ExecutorTask.run(OrderedExecutorFactory.java:100)

        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

        at java.lang.Thread.run(Thread.java:745)

      Caused by: io.netty.channel.ChannelException: Failed to open a socket.

        at io.netty.channel.socket.nio.NioSocketChannel.newSocket(NioSocketChannel.java:62)

        at io.netty.channel.socket.nio.NioSocketChannel.<init>(NioSocketChannel.java:72)

        at sun.reflect.GeneratedConstructorAccessor44.newInstance(Unknown Source)

        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)

        at java.lang.Class.newInstance(Class.java:442)

        at io.netty.bootstrap.AbstractBootstrap$BootstrapChannelFactory.newChannel(AbstractBootstrap.java:453)

        ... 18 more

      Caused by: java.net.SocketException: Too many open files

        at sun.nio.ch.Net.socket0(Native Method)

        at sun.nio.ch.Net.socket(Net.java:411)

        at sun.nio.ch.Net.socket(Net.java:404)

        at sun.nio.ch.SocketChannelImpl.<init>(SocketChannelImpl.java:105)

        at sun.nio.ch.SelectorProviderImpl.openSocketChannel(SelectorProviderImpl.java:60)

        at io.netty.channel.socket.nio.NioSocketChannel.newSocket(NioSocketChannel.java:60)

        ... 24 more

        • 1. Re: Error During Load Test Against A modcluster Configuration
          rhusar

          What OS is this? What is the concurrency level/how many clients are sending requests concurrently? What is the output of ulimit -a ?

          • 2. Re: Error During Load Test Against A modcluster Configuration
            wwang2016

            Hi Radoslav,

             

            It is CentOS 7, and I sent the requests through JMeter with

            20 concurrent users,

            ramp-up time=0,

            500 rounds.

             

            This is a load test through the Apache web server + WildFly app server. The errors appeared on the second day after I restarted the web server. If I restart the web server and run the load test again, the problem does not recur until the following day.

             

            The following is the output of ulimit -a

             

            core file size          (blocks, -c) 0

            data seg size           (kbytes, -d) unlimited

            scheduling priority             (-e) 0

            file size               (blocks, -f) unlimited

            pending signals                 (-i) 30547

            max locked memory       (kbytes, -l) 64

            max memory size         (kbytes, -m) unlimited

            open files                      (-n) 1024

            pipe size            (512 bytes, -p) 8

            POSIX message queues     (bytes, -q) 819200

            real-time priority              (-r) 0

            stack size              (kbytes, -s) 8192

            cpu time               (seconds, -t) unlimited

            max user processes              (-u) 4096

            virtual memory          (kbytes, -v) unlimited

            file locks                      (-x) unlimited
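
            Given the "Too many open files" in the WildFly stack trace, the soft limit above (open files = 1024) looks like the likely culprit. The commands below sketch how to inspect and raise it; the 8192/65536 values and the user name in the comments are examples, not my actual settings.

```shell
# Show the current soft and hard limits for open file descriptors:
ulimit -Sn
ulimit -Hn

# Raise the soft limit for the current shell (cannot exceed the hard
# limit without root); 8192 is just an example value:
# ulimit -n 8192

# To make the change permanent on CentOS 7, add lines like these to
# /etc/security/limits.conf (user name and values are examples), then
# log the user out and back in:
#   wayne  soft  nofile  65536
#   wayne  hard  nofile  65536
```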