2 Replies Latest reply on Jul 20, 2012 2:03 AM by Daniel Bevenius

    SwitchYardTestKit enhancements

    Daniel Bevenius Master

      Hi!

       

      In quite a few of the tests in SwitchYard, a MockHandler is used as a substitute for the real service. For example, a test might look something like this:

       

String payload = "dummy payload";
MockHandler greetingService = _testKit.registerInOnlyService("GreetingService");
// ... invoke the service, then assert against the messages captured by the mock
      

      This works well in cases where you don't really need to test the service itself. But other times you might want to verify the results of the real service and not a mock. This question was asked in #switchyard and is what triggered this post.

       

      The suggestion here is to enhance the SwitchYardTestKit with the requirements gathered in this post.

To start things off, how about something like this for a test that wants to verify the results of invoking a service:

       

@Test
public void sendTextMessageToJMSQueue() throws Exception {
    final String payload = "dummy payload";
    sendTextToQueue(payload, QUEUE_NAME);

    final String content = _testKit.waitForCompletion(String.class);
    assertThat(content, equalTo("Greeted " + payload));
}
      

The main point of interest is the call to waitForCompletion. This method takes the expected type of the message content, waits until the exchange has completed, and then extracts the content from the Message.

      You could also specify that you want the complete Exchange in cases where you need to verify properties and things like that.

      Also, perhaps we should have overloaded methods that enable you to specify a timeout.
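To make the proposal a bit more concrete, here is a minimal, self-contained sketch of how a waitForCompletion helper with a timeout could work internally. All names here are hypothetical suggestions for the test kit, not existing SwitchYard API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a handler records the content of each completed
// exchange, and the test blocks until one arrives (or a timeout expires).
class CompletionWaiter {
    private final BlockingQueue<Object> completed = new LinkedBlockingQueue<Object>();

    // Would be called by the test kit when an exchange completes.
    void exchangeCompleted(Object messageContent) {
        completed.offer(messageContent);
    }

    // Waits for the next completed exchange and returns its content,
    // coerced to the expected type.
    <T> T waitForCompletion(Class<T> type, long timeoutMillis) {
        try {
            Object content = completed.poll(timeoutMillis, TimeUnit.MILLISECONDS);
            if (content == null) {
                throw new IllegalStateException("no exchange completed within " + timeoutMillis + " ms");
            }
            return type.cast(content);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting", e);
        }
    }
}
```

An overload returning the full Exchange could follow the same pattern, queuing the exchange itself instead of just the message content.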

       

      Any suggestions or comments are welcome and I'll create a jira later when we have agreed on the requirements.

       

      Regards,

       

      /Dan

        • 1. Re: SwitchYardTestKit enhancements
          Keith Babo Master

          Hey Dan,

           

          First, I think mocks replacing real services is definitely something that we can do a better job of in general.  This is a common requirement and we should definitely improve the support we have now (e.g. specify the message that's returned instead of just forwarding in to out, etc.).
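For what it's worth, the "specify the message that's returned" idea could look roughly like this. A plain-Java sketch only; ConfigurableMock and its methods are made-up names, not the current MockHandler API:

```java
// Hypothetical sketch of a mock whose reply can be configured up front,
// instead of always forwarding the in message to the out message.
class ConfigurableMock {
    private Object reply;

    // Configure the message this mock should return for in-out exchanges.
    ConfigurableMock replyWith(Object message) {
        this.reply = message;
        return this;
    }

    // Handle an in-out exchange: use the configured reply if one was set,
    // otherwise fall back to echoing the input (the current forwarding behavior).
    Object handleInOut(Object inputContent) {
        return (reply != null) ? reply : inputContent;
    }
}
```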

           

          Second, there's the case of invoking a real service and then observing some outcome.  What's not crystal clear to me right now is (a) where in the invocation path should observation take place, (b) what is it that you want to observe.  I'm assuming that the outcome of observation is a result/event that you then assert against to verify that a given state has been reached.  For services that are not mocked, this can break down into three cases:

           

          (1) The service is in-only and returns nothing.

          (2) The service is in-out and returns an output message.

          (3) The service is in-out and returns a fault.

           

          (2) and (3) are easy enough to test against by simply asserting against the returned message content.  Is there something else missing to handle this case?
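To make cases (2) and (3) concrete, here is a tiny stand-alone model of asserting against an output message versus a fault. These are plain-Java stand-ins, not SwitchYard's real Exchange/Message types:

```java
// Simplified stand-in for an in-out exchange result: either a normal
// output message or a fault (not the real SwitchYard types).
class ExchangeResult {
    final Object content;
    final boolean fault;

    ExchangeResult(Object content, boolean fault) {
        this.content = content;
        this.fault = fault;
    }
}

class GreetingService {
    // Case (2): returns an output message for a valid payload.
    // Case (3): returns a fault for an empty payload.
    ExchangeResult invoke(String payload) {
        if (payload == null || payload.isEmpty()) {
            return new ExchangeResult("payload must not be empty", true);
        }
        return new ExchangeResult("Greeted " + payload, false);
    }
}
```

A test simply invokes the service and asserts on content and fault, which is why these two cases need little extra test-kit support.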

           

(1) is tough.  We can provide a hook to get the message counter for the service, which will let you know that the service has in fact been invoked.  We can also let you peek at the message before and/or after the invocation, but this has a narrower application in terms of testing, IMO.  For instance, what if the service doesn't change the content of the message at all?  In that case, looking at the message after the invocation doesn't get you much.

           

          I think we need to take multiple approaches for (1).  First, it should be simple enough to add a hook to catch the message pre/post invocation within the test kit.  For the cases where that's useful to the user in testing, great.  Separately, we need an easy way to hook into the service implementation to observe what happened inside the implementation.  For camel services, this could be a hook to get the camel context with routing details, stats, etc.  For BPMN 2, maybe this is a hook to get the knowledge session and context to see if a given variable is set to a given value.  My point here is that this is stuff that's very specific to an implementation type and we'll have to come up with stuff that's catered to each type.  The MixIn stuff seems like a good entry point for this type of functionality which would keep implementation-specific details out of the generic test kit.
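The pre/post hook plus message counter for case (1) might look something like this as a bare sketch. ObservedService and its members are invented for illustration, not existing test-kit API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: wraps an in-only service so a test can count
// invocations and peek at the message before and after the call.
class ObservedService {
    interface Handler { void handle(String message); }

    private final Handler realService;
    private final AtomicInteger invocationCount = new AtomicInteger();
    final List<String> preInvocation = new ArrayList<String>();
    final List<String> postInvocation = new ArrayList<String>();

    ObservedService(Handler realService) {
        this.realService = realService;
    }

    void invoke(String message) {
        preInvocation.add(message);       // peek before the invocation
        realService.handle(message);      // the real in-only service runs
        invocationCount.incrementAndGet();
        postInvocation.add(message);      // peek after the invocation
    }

    int getInvocationCount() {
        return invocationCount.get();
    }
}
```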

          • 2. Re: SwitchYardTestKit enhancements
            Daniel Bevenius Master

            Hey Keith,

             

            thanks for the feedback on this!

             

            First, I think mocks replacing real services is definitely something that we can do a better job of in general.  This is a common requirement and we should definitely improve the support we have now (e.g. specify the message that's returned instead of just forwarding in to out, etc.).

I'll create a separate jira for the handling of mocks.

            (2) and (3) are easy enough to test against by simply asserting against the returned message content.  Is there something else missing to handle this case?

Using the Invoker, this can be done today, and I don't think there is anything missing in that area.

            I think we need to take multiple approaches for (1).  First, it should be simple enough to add a hook to catch the message pre/post invocation within the test kit.

            I'll take a look at this and see how that works.

            For camel services, this could be a hook to get the camel context with routing details, stats, etc.  For BPMN 2, maybe this is a hook to get the knowledge session and context to see if a given variable is set to a given value.  My point here is that this is stuff that's very specific to an implementation type and we'll have to come up with stuff that's catered to each type.  The MixIn stuff seems like a good entry point for this type of functionality which would keep implementation-specific details out of the generic test kit.

Yeah, that sounds like a good idea, and the MixIns have access to the SwitchYardTestKit instance to look up anything they need. At the moment there is no CamelMixIn, but I can add one that provides access to the CamelContext to start with, and we can see what else would be useful after that.
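As a rough shape for that, such a MixIn could simply hold a reference to the test kit and expose the implementation-specific object. An illustrative, dependency-free sketch, with invented names (the real MixIn contract in SwitchYard will differ, and the real version would return an org.apache.camel.CamelContext):

```java
// Illustrative sketch of a MixIn that keeps implementation-specific
// lookups out of the generic test kit (invented names, not real API).
interface TestKitLookup {
    Object lookup(String name);
}

class CamelStyleMixIn {
    private final TestKitLookup testKit;

    CamelStyleMixIn(TestKitLookup testKit) {
        this.testKit = testKit;
    }

    // Returns whatever the test kit registered under this name; in the
    // real MixIn this would be the CamelContext with routes, stats, etc.
    Object getCamelContext() {
        return testKit.lookup("camelContext");
    }
}
```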