2 Replies Latest reply on Jul 8, 2019 3:10 PM by Laird Nelson

    What is bad about tunneling one scope through another?

    Laird Nelson Expert

      Suppose I have a producer method like this:

       

      @Produces
      @Dependent
      private Frob makeFrob(@Foo Frob fooFrob, @Bar Frob barFrob) {
         if (phaseOfMoonIsFull()) {
           return fooFrob;
         }
         return barFrob;
      }

       

      Let us further suppose that, one way or another, @Foo fooFrob is scoped with @ApplicationScoped and @Bar barFrob is scoped with @Dependent.
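      For concreteness, here is a sketch of how those two qualified beans might be declared. The @Foo and @Bar qualifiers and the Frob type are taken from the example above; the producer bodies and the jakarta namespace (use javax.* on older CDI versions) are assumptions:

      ```java
      import jakarta.enterprise.context.ApplicationScoped;
      import jakarta.enterprise.context.Dependent;
      import jakarta.enterprise.inject.Produces;

      @ApplicationScoped
      public class FrobProducers {

        // One shared Frob for the whole application; injection points
        // receive a container-generated client proxy to it.
        @Produces
        @Foo
        @ApplicationScoped
        Frob fooFrob() {
          return new Frob();
        }

        // A fresh Frob per injection point; no client proxy involved,
        // and its lifecycle is tied to whatever bean it is injected into.
        @Produces
        @Bar
        @Dependent
        Frob barFrob() {
          return new Frob();
        }
      }
      ```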

       

      If the (indirect) caller of this method receives fooFrob, is that bad in some way?  What problems exist when a pre-existing ApplicationScoped object gets returned by a Dependent-scoped producer method?

       

      Thinking about it, all I can come up with is that at some point the Dependent context will have its destroy() method invoked—which still seems harmless because that won't actually destroy the ApplicationScoped object, only the Dependent "copy".  Is there anything else that I am missing?

       

      Best,

      Laird

        • 1. Re: What is bad about tunneling one scope through another?
          Matěj Novotný Novice

          Hello

           

          With this kind of approach, a user (indirectly) calling the producer cannot rely on what they actually get.

          Even just masking one scope for another consistently (e.g. a dependent-scoped producer *always* returning an application-scoped bean) is awkward, yet I have seen it used several times for [dubiously] justifiable reasons.

          But here, the user can, based on a conditional statement they cannot see, sometimes get a new object and sometimes get an existing one with state. Depending on what information that bean has, this can be a problem.

          Furthermore, one kind has a client proxy (the normal-scoped, application-scoped bean) and the other doesn't. This can cause problems if callers somehow depend on that (or try to unwrap the proxy, for instance).
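          The proxy difference is observable from client code. A sketch of what "depending on it" can look like; this uses Weld's WeldClientProxy interface, which is a Weld-specific escape hatch and not portable CDI:

          ```java
          import org.jboss.weld.proxy.WeldClientProxy;

          public class FrobInspector {

            // frob is an injected Frob reference.
            Frob unwrapIfProxied(Frob frob) {
              // For a normal-scoped bean (e.g. @ApplicationScoped) the injected
              // reference is a container-generated proxy subclass, so
              // frob.getClass() is not Frob itself; for a @Dependent bean it is
              // the plain instance. Code branching on this breaks as soon as the
              // producer tunnels one scope through the other.
              if (frob instanceof WeldClientProxy) {
                return (Frob) ((WeldClientProxy) frob).getMetadata().getContextualInstance();
              }
              return frob;
            }
          }
          ```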

          Another thing off the top of my head: if one implementation is serializable and the other isn't, the user could experience weird behaviour should they rely on it in any way.

           

          The destroy() method you mentioned shouldn't be harmful: for any custom destruction with a producer you actually need a disposer method, which would be under your control (on the same class as the producer), because the spec mandates that.
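          A sketch of what that disposer would look like, sitting next to the producer from the question (Frob.cleanUp() is a hypothetical cleanup method, not something from the original example):

          ```java
          import jakarta.enterprise.context.Dependent;
          import jakarta.enterprise.inject.Disposes;
          import jakarta.enterprise.inject.Produces;

          public class FrobProducer {

            @Produces
            @Dependent
            Frob makeFrob(@Foo Frob fooFrob, @Bar Frob barFrob) {
              return phaseOfMoonIsFull() ? fooFrob : barFrob;
            }

            // Invoked when a @Dependent Frob from the producer above is destroyed.
            // Note it receives whatever the producer returned, so with tunneling
            // it may be handed the shared application-scoped instance, which it
            // must be careful not to tear down.
            void disposeFrob(@Disposes Frob frob) {
              frob.cleanUp(); // hypothetical cleanup
            }
          }
          ```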

           

          Glancing at the spec, this doesn't seem to be forbidden, though such an implementation seems a bit fishy; maybe it is hacking around some different problem?

          • 2. Re: What is bad about tunneling one scope through another?
            Laird Nelson Expert

            manovotn wrote:

             

            Hello

             

            With this kind of approach, a user (indirectly) calling the producer cannot rely on what they actually get.

            Even just masking one scope for another consistently (e.g. a dependent-scoped producer *always* returning an application-scoped bean) is awkward, yet I have seen it used several times for [dubiously] justifiable reasons.

            But here, the user can, based on a conditional statement they cannot see, sometimes get a new object and sometimes get an existing one with state.

            Yes, exactly right.

            Depending on what information that bean has, this can be a problem.

            Agreed.

            Furthermore, one kind has a client proxy (the normal-scoped, application-scoped bean) and the other doesn't. This can cause problems if callers somehow depend on that (or try to unwrap the proxy, for instance).

            Agreed.

            Another thing off the top of my head: if one implementation is serializable and the other isn't, the user could experience weird behaviour should they rely on it in any way.

            That's a good point.

            The destroy() method you mentioned shouldn't be harmful: for any custom destruction with a producer you actually need a disposer method, which would be under your control (on the same class as the producer), because the spec mandates that.

             

            Glancing at the spec, this doesn't seem to be forbidden, though such an implementation seems a bit fishy; maybe it is hacking around some different problem?

            I think part of the issue is the age-old problem in the CDI specification that (a) a user isn't supposed to care what scope an object has (that's the whole point of separating that aspect out into context implementations), but (b) dependent objects create memory-leak problems if the user does not take care, in certain advanced situations, to dispose of them properly. So there are always cases where the user has to care about the scope and the inner workings of how an object is produced. A pity, but at least a known pity.

             

            In this particular use case I have in mind all sorts of undefined behavior would result if the user is relying on the object being scoped in a particular way, or if the user is relying on the object being proxied, etc.  In general, the more the user relies on implementation details of CDI the worse off she is going to be.

             

            What I am hearing here is: there isn't really any enormous problem using this kind of scope tunneling unless the user goes digging.  It may "smell funny", but it doesn't seem to be prohibited or called out as awful behavior.  Particularly in the case where a longer-lived scoped object is being "tunneled through" into Dependent scope, there don't seem to be severe issues.  Obviously going the other way—tunneling a dependent-scoped object through into ApplicationScoped-scope or something like that—would be more problematic.

             

            Thanks for your time as always,

            Best,

            Laird