5 Replies Latest reply on Feb 18, 2020 3:51 AM by vokail

    Batch subsystem in console


      I found this old announcement about WildFly 13: Batch Subsystem in WildFly 13 Admin Console


      In that announcement you can see the following:



      In WildFly 17.0.1 I can't find runtime monitoring for jobs; in particular, Runtime -> Server -> Batch does not match the WildFly 13 announcement:



      Am I missing something? Do I have to enable or configure something in HAL to display this information?
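As a side note, the same batch runtime information the console renders can also be read directly through the management CLI, which helps tell a console (HAL) problem apart from a subsystem problem. A sketch, assuming a default standalone setup (the deployment name `myapp.war` is only an example):

```shell
# Connect to a locally running WildFly with the management CLI
# (the path to jboss-cli.sh depends on your installation)
$JBOSS_HOME/bin/jboss-cli.sh --connect

# Read the batch-jberet subsystem configuration and runtime attributes
/subsystem=batch-jberet:read-resource(include-runtime=true)

# List batch job resources for a deployed application
# (replace myapp.war with your actual deployment name)
/deployment=myapp.war/subsystem=batch-jberet:read-resource(include-runtime=true,recursive=true)
```

If these commands return the expected job data but the console does not show it, the issue is on the console side rather than in the batch subsystem.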

        • 1. Re: Batch subsystem in console

          Do you mean there are jBeret (batch) jobs running on your WildFly setup that are no longer shown in the admin console? Maybe cfang might know about this?

          • 2. Re: Batch subsystem in console

            I'm not aware of any changes in displaying batch jobs in admin console.  I just tried it in current WildFly snapshot (19 beta 2 snapshot), and I got the same view as in WildFly 13.  I would expect WildFly 17 to behave the same (yes, just confirmed).


            • 3. Re: Batch subsystem in console

              Thanks for the response.

              I think in my setup the job column is missing here:



              So I can rephrase my question like this: why am I not able to see this column? Is it something I have to enable with some option? Or maybe it's hidden if there are no jobs running?

              • 4. Re: Batch subsystem in console

                No need to configure anything. The jobs column should be there in all cases. I've just tried WildFly 17.0.1.Final, and I was able to see the jobs column and batch runtime attributes on the right, with no batch application deployed. I'm using Firefox and Chrome. You may want to try different browsers or newer versions, or adjust font sizes, and see if it makes any difference.


                • 5. Re: Batch subsystem in console

                  I think I have some kind of problem with my local setup.


                  As a reference, on an AWS instance I can see the jobs tab:



                  but on localhost I don't:



                  It's the same WildFly instance.


                  I have another question: could it be that on localhost I have a lot of jobs (over 5k rows in the job_execution table, on Postgres) and it takes a long time to load/process them?

                  This would also explain why on localhost my application login hangs and even the status page displays nothing:
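To check the size-of-the-job-repository hypothesis, the table can be inspected directly in Postgres. A sketch, assuming the jBeret JDBC job repository schema (the database name `mydb` is only an example; verify the actual table and column names against your schema):

```shell
# Count rows in the jBeret job repository table mentioned above
psql -d mydb -c "SELECT count(*) FROM job_execution;"

# Break executions down by batch status to spot executions
# stuck in a non-final state (column name assumed from the
# jBeret JDBC repository schema -- verify against your DB)
psql -d mydb -c "SELECT batchstatus, count(*) FROM job_execution GROUP BY batchstatus;"
```

If the row count is large and many executions are in a non-final state, that would support the idea that loading/processing them is what slows things down.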




                  Maybe it's better if I open a new issue so the devs can investigate this?


                  In the WildFly logs I can also see this; I'm not sure if it's related or not (the first line is just a reference, opening a MongoDB connection, then nothing for 12 minutes...):


                  09:34:30,073 INFO  [org.mongodb.driver.connection] (EJB default - 1) Opened connection [connectionId{localValue:3, serverValue:135}] to localhost:27017

                  09:46:50,548 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffffc0a8a6e1:97f6705:5e4ba15a:2d in state  RUN

                  09:48:00,189 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper Worker 0) ARJUNA012095: Abort of action id 0:ffffc0a8a6e1:97f6705:5e4ba15a:2d invoked while multiple threads active within it.

                  09:48:00,880 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffffc0a8a6e1:97f6705:5e4ba15a:2d in state  CANCEL

                  09:49:39,425 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012378: ReaperElement appears to be wedged: java.util.Hashtable.values(Hashtable.java:763)

                  09:49:44,551 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffffc0a8a6e1:97f6705:5e4ba15a:2d in state  CANCEL_INTERRUPTED

                  09:50:31,761 WARN  [com.arjuna.ats.arjuna] (Transaction Reaper) ARJUNA012120: TransactionReaper::check worker Thread[Transaction Reaper Worker 0,5,main] not responding to interrupt when cancelling TX 0:ffffc0a8a6e1:97f6705:5e4ba15a:2d -- worker marked as zombie and TX scheduled for mark-as-rollback
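For what it's worth, the ARJUNA012117/ARJUNA012095 warnings above indicate a transaction exceeded the coordinator's default timeout and the transaction reaper tried (and struggled) to abort it. If long-running batch work legitimately needs more time, one possible mitigation is raising the default transaction timeout; a sketch via the management CLI (the 600-second value is only an example, not a recommendation):

```shell
# Raise the default transaction timeout to 10 minutes (example value)
$JBOSS_HOME/bin/jboss-cli.sh --connect \
  --command="/subsystem=transactions:write-attribute(name=default-timeout,value=600)"

# Reload the server so the change takes effect
$JBOSS_HOME/bin/jboss-cli.sh --connect --command=":reload"
```

That said, if the transaction is wedged because of a stuck job rather than legitimately slow work, raising the timeout only delays the abort; finding what the worker thread is blocked on (e.g. with a thread dump) would be the more useful next step.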