19 Replies. Latest reply on Mar 27, 2014 4:04 PM by kwintesencja

    Feedback for Arquillian-JBehave extension


      Hi Guys,

    I've been working on an Arquillian-JBehave extension to enable BDD tests to run in-container, and it's more or less ready for an Alpha1 release. Before that, though, I'd like some feedback on a few areas of concern:


      Extraction of Resources required for JBehave reports

        Currently, I have an Observer (ViewResourcesUnpacker) that unpacks resource artifacts hosted at Maven Central into the 'target/jbehave/view' directory. This more or less performs the same task as the UnpackViewResources mojo of the jbehave-maven-plugin. The observer was written to prevent the appearance that the extension is "broken" (since the reports would otherwise be displayed without the stylesheets and other resources), and also to do away with documentation on the need to run the jbehave-maven-plugin:unpack-view-resource goal. However, after accounting for CI servers, and Arquillian users that use Ant/Ivy/Gradle etc., I'm reconsidering this decision. What do you think?
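For readers wondering what the unpacking amounts to, a minimal sketch using only the JDK is below. The class and method names are mine, not the extension's actual ViewResourcesUnpacker, and it assumes the view resources arrive as a zip archive:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

// Illustrative sketch only -- not the extension's actual ViewResourcesUnpacker.
// Unpacks a zip of view resources (stylesheets, images, scripts) into a
// target directory such as target/jbehave/view.
class ViewResourceUnpack {

    static void unpack(Path zipFile, Path targetDir) throws IOException {
        try (ZipInputStream zin = new ZipInputStream(Files.newInputStream(zipFile))) {
            for (ZipEntry entry; (entry = zin.getNextEntry()) != null; ) {
                Path out = targetDir.resolve(entry.getName()).normalize();
                if (!out.startsWith(targetDir)) {
                    continue; // guard against zip-slip entries like ../../evil
                }
                if (entry.isDirectory()) {
                    Files.createDirectories(out);
                } else {
                    Files.createDirectories(out.getParent());
                    Files.copy(zin, out, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```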


      Dependency Injection

      Dependency injection into Steps instances is performed by the ArquillianInstanceStepsFactory, using TestEnrichers that have been stored in a ThreadLocal property of the StepEnricherProvider class. This contraption is necessary because Arquillian performs injection into Test instances, not Step instances. To add to the complexity, JBehave creates new threads via an ExecutorService to run the Scenarios and their associated Steps. Note that JBehave drives the ArquillianInstanceStepsFactory to create the Steps needed for a scenario, so any dependency injection performed by this StepsFactory occurs in the same thread and context as the one created by JBehave. It is possible to configure the ExecutorService used by JBehave, and the examples demonstrate this (see the constructor) by using Guava's SameThreadExecutor to retain the original Arquillian contexts in the thread used by JBehave.


      It is quite apparent that we need to package and use Guava (or any other library that provides a similar ExecutorService that doesn't spawn off new threads). Therefore, my questions are: is there any such "lightweight" replacement for Guava? Or should we roll our own ExecutorService that does something similar?
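If we did end up rolling our own, a same-thread ExecutorService needs only the JDK. Here is a minimal, illustrative sketch (the class name is mine): every submitted task runs immediately on the caller's thread, so Arquillian's ThreadLocal-scoped contexts remain visible to the Steps that JBehave executes.

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.AbstractExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a hand-rolled "same thread" ExecutorService, in case we decide
// not to package Guava. No new threads are ever spawned.
class SameThreadExecutorService extends AbstractExecutorService {

    private volatile boolean shutdown = false;

    @Override
    public void execute(Runnable command) {
        command.run(); // run on the calling thread
    }

    @Override
    public void shutdown() {
        shutdown = true;
    }

    @Override
    public List<Runnable> shutdownNow() {
        shutdown = true;
        return Collections.emptyList();
    }

    @Override
    public boolean isShutdown() {
        return shutdown;
    }

    @Override
    public boolean isTerminated() {
        return shutdown;
    }

    @Override
    public boolean awaitTermination(long timeout, TimeUnit unit) {
        return shutdown;
    }
}
```

AbstractExecutorService supplies submit()/invokeAll() on top of execute(), so this drop-in covers what JBehave's Embedder needs from an ExecutorService.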


        Also, should we decide on an ExecutorService for the users of this extension (by creating an ArquillianJUnitStory class)? Or do we allow users to configure it, as in the example?


      Future plans:


      Lifecycle of Drone-injected WebDriver/Selenium instances

        I plan to "borrow" the concepts supported by the PerStoryWebDriverSteps/PerStoriesWebDriverSteps/PerScenarioWebDriverSteps classes of JBehave-Web, to allow WebDriver/Selenium instances to be valid for a certain scope (a single step, multiple steps, a single scenario, or multiple scenarios) instead of the entire test execution. This may not appear in the Alpha1 release.


      PS: In case you'd like to read the notes, they're here in a Gist.

        • 1. Re: Feedback for Arquillian-JBehave extension

          An extract from feedback received from Aaron Walker (author of JBehave CDI/Weld support) earlier in the week:


          I've taken a quick look at your fork https://github.com/VineetReynolds/arquillian-testrunner-jbehave and it looks like you've made great progress.


          A couple of things I was thinking about trying to support with this type of integration: I really don't like having to add the Step classes manually. The CDI/Weld integration I wrote allows you to annotate a class with @WeldStep, and it gets added automatically as a candidate step class. Another thing, though not really a big deal, is allowing the JBehave Configuration to be injected.


          Considering this, I might investigate implementing the AnnotatedEmbedderRunner/AnnotatedPathRunner type in the Arquillian integration in some manner for the Alpha1 release.

          • 2. Re: Feedback for Arquillian-JBehave extension

            Hi Vineet,


            First, let me say I really like seeing another approach to how users can define tests in Arquillian.


            I was looking at the Drone hack you have introduced in the Alpha1 code base. I'm convinced that it cannot be fixed easily; basically, the TestEnricher in Drone is not prepared to be executed in "standalone" mode, and Drone unfortunately does not export enough SPI to let you fire events yourself in the JBehave test runner. See https://issues.jboss.org/browse/ARQ-754 for further details.


            I think the proper way to handle this is to get Arquillian Drone 1.0.0.Final out and do the SPI updates for the upcoming 1.1.0. What do you think, does this approach work for you?





            • 3. Re: Feedback for Arquillian-JBehave extension

              Hi Karel,

                  I understand your reasoning for bringing it in Drone 1.1. I'm quite fine with that, especially since it is a generic way to ensure that Drone instances have a lifecycle shorter than the test itself. Also, with ARQ-754 the lifecycle could be controlled by other extensions (not just JBehave). For now, I'll investigate using jbehave-web, or argument injection (@Drone drone... as a test method arg), for providing Drone instances to the steps. Lifecycle control would be tracked as a separate issue.



              • 4. Re: Feedback for Arquillian-JBehave extension

                Hi Vineet,


                I stumbled across your project while I was working to get JBehave to play nice with Arquillian, but I've run into a bit of a problem! The reports generate as expected with a managed container, but with a remote container (JBoss AS 7.1.1) the reports will be written to the server's install folder under bin/target/jbehave. Otherwise, the tests run perfectly with the enrichers providing all the extra stuff (save for Drone, I haven't tried that yet!)


                I've been doing a fair amount of digging to try and figure out how test results are returned to the client from the remote container, but haven't had any luck. I imagine a custom report generator for JBehave may work, as I haven't found anything so far indicating that reports are actually passed from the server to the client. Any direction to the proper resources or documentation would be fantastic.




                • 5. Re: Feedback for Arquillian-JBehave extension

                  I hope this functionality will be ready soon.


                  I believe it is very important and will help a lot.


                  When will the final version be ready?

                  • 6. Re: Feedback for Arquillian-JBehave extension

                    I would suggest another extension: a junction of Arquillian, JBehave and Selenium. What do you think?

                    Another suggestion is to combine Arquillian, JBehave and JSFUnit.

                    • 7. Re: Feedback for Arquillian-JBehave extension

                      Hi Clayton,


                      is there a reason why you cannot combine Arquillian JBehave + Arquillian Drone, or Arquillian JBehave + Arquillian Warp, in order to have the junctions you defined above? Will a single, highly specialized extension provide you some extra functionality?

                      • 8. Re: Feedback for Arquillian-JBehave extension

                        Hi Logan,

                           Thanks for taking the time out to test drive this.


                           You've asked a good question. I encountered a similar problem while writing the examples. Frankly, I dislike the manner in which JBehave currently prepares the reports, since it tends to cause problems like the one you've described. From what I remember, the reports are generated in the test phase of the build, not in a separate reporting phase. Additionally, JBehave reports are generated in a directory relative to the path where the stories are located. The StoryReporterBuilder can be provided with a codeLocation URL, which is eventually used to determine where the reports are written. By default, the codeLocation happens to be the relative path "target/classes" (this is hardcoded somewhere in JBehave-Core), and a child directory named "jbehave" is created under the target directory; reports are written to this jbehave directory. The path is relative to the current working directory of the Java process, which is why you're seeing the jbehave directory created as bin/target/jbehave in the AS home.
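The working-directory behaviour itself is plain JDK semantics; this tiny illustration (names are mine) shows how a relative path resolves against user.dir:

```java
import java.io.File;

// Demonstrates why a remote container writes reports under its own working
// directory: a relative File like "target/jbehave" resolves against the
// JVM's current working directory (user.dir). For JBoss AS 7 started from
// $JBOSS_HOME/bin, that becomes $JBOSS_HOME/bin/target/jbehave.
class RelativePathDemo {

    static File resolve(String relativePath) {
        return new File(relativePath).getAbsoluteFile();
    }
}
```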


                          Now, the way I see it, for now you could configure the codeLocation to a URL where you'd like the reports to be written. Something like the below would result in reports being written to C:\tmp\jbehave:


                           public Configuration configuration() {
                               URL codeLocation = null;
                               try {
                                   codeLocation = new File("C:\\tmp\\jbehave\\").toURI().toURL();
                               } catch (MalformedURLException e) {
                                   throw new RuntimeException(e);
                               }
                               Configuration configuration = new MostUsefulConfiguration()
                                       .useStoryPathResolver(new UnderscoredCamelCaseResolver())
                                       .useStoryReporterBuilder(new StoryReporterBuilder()
                                               .withCodeLocation(codeLocation)
                                               .withFormats(CONSOLE, TXT, HTML, XML));
                               return configuration;
                           }


                           I'll consider incorporating a feature to do this seamlessly, to allow for portable tests across managed, embedded and remote Arquillian container adapters; but if it holds up the first Alpha release, I'll have to push it to a later one.



                        • 9. Re: Feedback for Arquillian-JBehave extension



                          I went ahead and tried out something similar to that, where the results are written into a directory structure following the fully-qualified class name of the story being run. It works well enough that I'm not getting false failures between tests (and projects!), so I'm able to get by. Luckily, Arquillian will at least report failures, but pending scenarios will falsely report success.


                          Here's what my code looks like now:



                          @RunWith( Arquillian.class )
                          public abstract class ArquillianStory extends JUnitStory {

                              // ...

                              public Configuration configuration() {
                                  Configuration configuration = super.configuration();
                                  configuration.useStoryControls( new StoryControls()
                                          .doDryRun( false )
                                          .doSkipScenariosAfterFailure( false ) )
                                      .useStoryLoader( new LoadFromClasspath() )
                                      .useStoryPathResolver( new CasePreservingResolver() )
                                      .useStoryReporterBuilder( new StoryReporterBuilder()
                                          .withCodeLocation( this.getCodeLocation() ) // <-- #getCodeLocation() can be overridden to whatever you like
                                          .withFormats( CONSOLE, HTML, TXT, XML )
                                          .withFailureTrace( true ) );
                                  return configuration;
                              } // configuration

                              protected URL getCodeLocation() {
                                  try {
                                      return new File( "./target/jbehave/" + this.getClass().getName().replace( '.', '/' ) ).toURI().toURL();
                                  } catch( MalformedURLException exception ) {
                                      throw new RuntimeException( exception );
                                  }
                              } // getCodeLocation

                              // ...

                          } // class ArquillianStory


                          • 10. Re: Feedback for Arquillian-JBehave extension

                            The reason why Arquillian does not flag pending scenarios as failures is because of how JBehave treats pending steps - they do not fail by default. I believe that if you follow the information provided at http://jbehave.org/reference/stable/pending-steps.html, you should be able to flag pending scenarios as failures by using the FailingUponPendingStep class as the PendingStepStrategy.
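Assuming the standard JBehave API, wiring that strategy into the configuration is a small fragment like the following (an untested sketch based on the pending-steps documentation, not code from the extension):

```java
import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.failures.FailingUponPendingStep;

// Configuration fragment: treat pending steps as failures so that
// pending scenarios no longer report success.
Configuration configuration = new MostUsefulConfiguration()
        .usePendingStepStrategy(new FailingUponPendingStep());
```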

                            • 11. Re: Feedback for Arquillian-JBehave extension

                              Quoting reply 7:

                              Hi Clayton,

                              is there a reason why you cannot combine Arquillian JBehave + Arquillian Drone, or Arquillian JBehave + Arquillian Warp, in order to have the junctions you defined above? Will a single, highly specialized extension provide you some extra functionality?



                              I do not know how to do this integration, so I assumed it was not possible without hacking code.
                              Could you provide some sample code, please?

                              • 12. Re: Feedback for Arquillian-JBehave extension

                                Thanks, Vineet, I completely missed that!


                                I've also found a couple other goofs...


                                You can't really merge all the JBehave dependencies down into a single archive to be further merged with the deployment if you include beans.xml. I've been playing with a couple of utilities leveraging CDI, and if you put beans.xml in that merged archive, your test cases will blow clear the hell up. What I did as a workaround was implement the ApplicationArchiveProcessor interface and instead add all the dependencies to the test deployment as libraries. This unfortunately means only EnterpriseArchive and WebArchive deployments are possible, but the majority of [at least my] test deployments will be exactly these, so this becomes a bit of a non-issue [for me]. Granted, this was because of some internal utilities I was adding, and probably not something you need to worry about yourself.


                                There's also the commons-io dependency that JBehave declares. I personally prefer the >= 2.0 versions of commons-io, so when I started testing deployments using commons-io, I got a pile of class-not-found and method-not-found errors. I added an exclusion for commons-io to the Maven dependency resolver that appends the supplementary archives, and added a dependency on commons-io >= 2.0 to pull the latest version. JBehave looks like it plays nicely with the >= 2.0 versions (it was pulling 2.3 for what I was trying out). I haven't figured out a way to make inclusion or exclusion of commons-io configurable from a test case, so in the meantime I'm just keeping the current set-up.
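For Maven-built projects, the equivalent pin in a pom would look like the fragment below. This is illustrative only (the poster did this through the resolver configuration, not the pom, and the jbehave-core version shown is an assumption):

```xml
<!-- Exclude the old commons-io that jbehave-core drags in... -->
<dependency>
  <groupId>org.jbehave</groupId>
  <artifactId>jbehave-core</artifactId>
  <version>3.6</version> <!-- assumed version, adjust to your build -->
  <exclusions>
    <exclusion>
      <groupId>commons-io</groupId>
      <artifactId>commons-io</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- ...and declare the >= 2.0 commons-io explicitly -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>2.3</version>
</dependency>
```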

                                • 13. Re: Feedback for Arquillian-JBehave extension

                                  I'm having a slightly hard time grasping what you've said about CDI. Could I look at a test case where this is a problem?


                                  On the topic of overriding commons-io and other dependencies pulled in by JBehave, I'll take a look at whether it is possible to do so using the ShrinkWrap 2.0 Resolvers.

                                  • 14. Re: Feedback for Arquillian-JBehave extension

                                    The issue with CDI was caused by some fiddling around I was doing myself and is likely not something you need to worry about. If you want to see the problems I was experiencing, though, you can add an empty beans.xml asset to the auxiliary archive and it'll barf all over itself.


                                    I'm putting together the modifications I made in a fork of your project on github and I'll let you know when it's up.
