I have been searching for answers on how to load object domain models to remote clients in the EJB3 paradigm, in both the JBoss and Hibernate forums. I found many discussions, for example,
It seems lots of people have the same question and confusion as I do; however, so far I have not found any satisfactory answer.
Here are my related questions, which I will try to ask from a different perspective:
1. Should we keep the *whole* data model graph in the object domain model (POJOs)?
If the answer is yes, remote client loading will be problematic, no matter whether it is lazy or eager. For lazy loading: currently, JBoss and the EJB3 spec do not support lazy loading from remote clients, so you would have to implement a client-side proxy on your own. For eager loading: well, you don't really want to send remote clients the whole data graph, right?
If the answer is no for the *whole* graph, and we instead keep only a small graph, i.e., the object domain model keeps only a limited set of relationships within a small section of the graph and leaves the majority of relationships unlinked, then eager fetching is not too bad, since the object graph is very limited. But then, what is the logical way of breaking up those relationships in the object model? A good example would be very helpful. On the other hand, you will then need code logic to maintain those unlinked relationships, which feels like a hack to me.
2. The other extreme is not to keep relationships at all, not even in the server-side entity beans; hence, there are no relationship problems to solve. But now the relationships live in your bean methods, woven together with all the table IDs. In this case, what is the point of using CMP, besides it being another way of writing SQL statements?
3. In EJB2 and earlier, DTOs (or value objects) were the only way to go. What new value does EJB3 bring us, as far as remote clients are concerned?
I guess a remote-client lazy loading mechanism is not going to solve the whole issue either, because then the remote clients will have to manage the relationships and make sure they stay in sync with the server. Comments?
Yet another solution is to mix eager and lazy fetch strategies. If you need a larger portion of the graph, use "FETCH JOIN" within your query, or do a little more work in your DAO.
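To make the mixed strategy concrete, here is a small sketch of a DAO that builds either the plain query (collections stay lazy) or the "left join fetch" variant (collections come back initialized), depending on what the caller needs. The class and method names are invented for illustration; in a real DAO the returned string would be passed to EntityManager.createQuery(...).

```java
// Hypothetical DAO sketch: one finder, two fetch strategies.
class CategoryDao {

    /** Builds the JPQL for loading a Category, eagerly fetching its items on request. */
    String findCategoryQuery(boolean fetchItems) {
        if (fetchItems) {
            // Eager variant: items arrive in the same round trip, so the
            // returned graph is safe to serialize to a remote client.
            return "from Category c left join fetch c.items where c.id = :id";
        }
        // Lazy variant: items remain an uninitialized proxy, which is fine
        // for local callers that stay inside the persistence context.
        return "from Category c where c.id = :id";
    }
}
```

The point is that the fetch decision lives in one server-side place, rather than in the mapping annotations, so remote and local callers can share the same entity model.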
Glad to see you at JavaOne. You guys were amazing, standing at your pavilion booth answering questions! I really appreciate that.
Thanks for the quick reply. The mix of lazy and eager is definitely a good way to do it, if we can make it work. First, how does the lazy part work for remote clients? Maybe this is the part we should not make lazy at all, i.e., leave anything lazy to the server.
I guess I will need to start from the beginning, with a model to discuss. Let's assume a data model hierarchy:
Category ----< Item ----< Bidder ----< User ----< Address
i.e., the example used in the Hibernate tutorial, but assuming all relationships are one-to-many and all fetch strategies are annotated as lazy.
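For concreteness, the top of that chain could look roughly like this. This is only a sketch: the class and field names are assumed from the hierarchy above, and the JPA annotations are shown as comments so the snippet stays plain Java.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the Category ----< Item ----< Bidder part of the chain.
// In the real mapping each collection field would carry
//     @OneToMany(fetch = FetchType.LAZY)
// so nothing below the root is loaded until the collection is touched.
class Bidder {
    String name;
}

class Item {
    String description;
    List<Bidder> bidders = new ArrayList<>(); // lazy in the real mapping
}

class Category {
    Long id;
    String name;
    List<Item> items = new ArrayList<>();     // lazy in the real mapping
}
```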
In the server, if I have a DAO that retrieves a category by its id, then of course the items collection in the returned Category object will be just a proxy, not actually initialized.
So, if I understand correctly, your suggestion is that depending on who the caller is, if it is a remote client, instead of using the above method we should use the query
from Category c left join fetch c.items where c.id=?
so that the items collection will be eagerly fetched and sent to the remote client. Now, what about the bidders collection in each of the items? Those are still just uninitialized proxies that are useless to remote clients. Actually, at this point, I don't want them to be initialized, otherwise I get the whole graph. But I don't want them to throw an uninitialized-proxy exception when the remote client accesses the collection either. How do I get control at this point?
Having different functions for local clients and remote clients seems inconsistent to me. I think I can encapsulate the implementation details behind consistent external functions, but then there will be a lot of them...
I recommend, in the strongest possible terms, that you NEVER, EVER expose your data model to remote clients.
Anything that crosses a network boundary becomes part of the remote interface of that service. Do you really want your database structure forcibly tied, by way of serialization, to the API for your service? You'll either end up having to deploy all code to all servers any time you make any change, or being unable to make forward-looking changes to your data model because they will break serialization to your clients.
The amount of overhead to copy data into API-specific objects is negligible compared to the cost of a remote invocation. The benefit is that you can alter your data model at will without affecting remote clients. If you're smart with your interfaces, you can even guarantee backwards compatibility with old clients as you evolve the interface.
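A sketch of what "copying into API-specific objects" can look like; all class and field names here are invented for illustration:

```java
// Internal entity: free to change shape as the schema evolves.
class Category {
    Long id;
    String name;
    String internalAuditColumn; // schema detail the client must never see

    Category(Long id, String name, String audit) {
        this.id = id;
        this.name = name;
        this.internalAuditColumn = audit;
    }
}

// API-specific object: the only class clients compile against.
class CategorySummary implements java.io.Serializable {
    final long id;
    final String displayName;

    CategorySummary(long id, String displayName) {
        this.id = id;
        this.displayName = displayName;
    }
}

class CategoryAssembler {
    // The copy is cheap next to the remote call that carries it, and the
    // entity can gain or lose columns without touching CategorySummary.
    static CategorySummary toSummary(Category c) {
        return new CategorySummary(c.id, c.name);
    }
}
```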
IMNSHO, this is the biggest common mistake people make with EJBs. Compile your interfaces (and all serialized classes that are part of the interfaces) separately and put them in a separate JAR.
Thanks for your idea and strong suggestion. You have a point: encapsulating the internal data structure is very important. The upsides are numerous; the downsides are
(1) as you mentioned, copying data back and forth in client/server communication, just like EJB2 does;
(2) now you have two object models to maintain: one on the server, which is closely tied to the database data model, and one on the wire, which is common to both client and server for communication.
What is EJB3's benefit now?
What is EJB3's benefit now?
Benefit as opposed to what? Writing your own component model that handles clustering (for load and failover), declarative transaction and security context propagation, RMI/SOAP/IIOP invocation (and the ability to easily create custom invokers), lifecycle, persistence... ?
If you mean benefit as opposed to EJB2.1, I'd say the core strengths of EJB3 are ease of use, ease of use, and a persistence model that is actually useful.
EJB3 doesn't magically solve age-old architecture problems with static interfaces. Nor will any EJB spec likely ever do so.
Yes, I meant as opposed to EJB2. I read in "Hibernate in Action", and in numerous EJB3 articles (like this one: http://www.jroller.com/comments/raghukodali/Weblog/does_ejb_3_0_really), that transferring your data using the domain model objects, by which they mean the EJB3 entity beans (or POJOs in Hibernate), is the natural way of EJB3. Well, until I ran into this bump!
It is clear to me now that in complex systems, the objects that map to the database tables are not the objects that should be transferred to remote clients. That approach only works for simple systems.
That approach only works for simple systems
IMHO, that's the crux of it right there. There are an awful lot of people editorializing who have never applied their principles to an environment with more than one server in it.
I used to be a senior engineer at a certain Very Large Game Company. They have, I am fairly certain, the largest production cluster of Java servers in existence. There are over a thousand machines in the cluster, with at least 100 different types of server. They all communicate with each other via Java serialization.
Because of serialization, and because there has been no concrete effort to keep interface classes isolated and stable, all code must be deployed to ALL app servers simultaneously... all thousand machines. Gack.
You don't want to go there... keep your interfaces well defined and independent of your implementation. Compile them separately to make absolutely certain.
BTW, I think there is a big difference between interface classes and DTOs. A DTO is something that mirrors your data model. An interface class is something that provides a client interface. You don't need to keep interface classes in sync with your data model. You only change an interface class if the client needs different data.
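The distinction can be sketched in code (all names invented): a DTO shadows the entity field-for-field, while an interface class is shaped by what the client asked for.

```java
// Data-model side (server internal).
class User {
    String firstName;
    String lastName;
    String passwordHash; // internal, never serialized out

    User(String first, String last, String hash) {
        this.firstName = first;
        this.lastName = last;
        this.passwordHash = hash;
    }
}

// A DTO mirrors the data model, so it has to change whenever the model does.
class UserDto implements java.io.Serializable {
    String firstName;
    String lastName;
}

// An interface class is shaped by the client's needs; the data model can
// change underneath it without breaking serialization to old clients.
class UserView implements java.io.Serializable {
    final String displayName;

    UserView(String displayName) {
        this.displayName = displayName;
    }

    static UserView of(User u) {
        return new UserView(u.firstName + " " + u.lastName);
    }
}
```

If User later splits its name into three columns, only UserView.of changes; the class the client compiles against, and its serialized form, stay put.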
Sorry, you hit a sore spot :-)