The current approach to configuring Hibernate is java.util.Properties based: basically you throw a bunch of key-value pairs at it and it configures itself. But such an approach has a number of limitations.
The biggest issue to my mind is the lack of a well-defined lifecycle for Configuration. For example, currently a Dialect is not given to the code that processes mapping information because the Dialect may or may not be known at that moment; this is why Hibernate currently cannot automatically quote identifiers that the dialect reports as keywords or reserved words (let alone those reported by java.sql.DatabaseMetaData#getSQLKeywords()).
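To make the idea concrete, here is a minimal sketch of what such automatic quoting could look like once the dialect's keyword list is actually available at mapping-processing time. The class and method names here are purely illustrative, not Hibernate API; the keyword set stands in for whatever the dialect (or DatabaseMetaData#getSQLKeywords()) would report:

```java
import java.util.Locale;
import java.util.Set;

// Illustrative sketch only: quote an identifier when the dialect/connection
// reports it as a keyword or reserved word. None of these names are real
// Hibernate API.
public class IdentifierQuoter {
    private final Set<String> keywords; // as reported by the dialect/connection
    private final char quoteChar;

    public IdentifierQuoter(Set<String> keywords, char quoteChar) {
        this.keywords = keywords;
        this.quoteChar = quoteChar;
    }

    public String quoteIfKeyword(String identifier) {
        if (keywords.contains(identifier.toUpperCase(Locale.ROOT))) {
            return quoteChar + identifier + quoteChar;
        }
        return identifier;
    }

    public static void main(String[] args) {
        // Pretend the dialect reported these as reserved words.
        IdentifierQuoter quoter = new IdentifierQuoter(Set.of("ORDER", "USER"), '"');
        System.out.println(quoter.quoteIfKeyword("order"));    // "order"
        System.out.println(quoter.quoteIfKeyword("customer")); // customer
    }
}
```

The point is only that this check is trivial once the keyword list is in hand; today's lifecycle means the list may simply not exist yet when mappings are processed.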
Another concern is the inability to pass in fully configured instances. This is mostly an integration concern: consider, for example, a ConnectionProvider or a TransactionFactory where you might like to construct the instance yourself and hand that instance to Hibernate to use.
A running list of use cases I have seen with the old Configuration that we should try to continue to support in the new APIs if at all possible:
- Applying the same schema to multiple databases by defining the mappings once, then:
  - applying settings (including JDBC info) and running schema export
  - changing the settings (new JDBC info) and running schema export again
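This multi-database use case might be sketched roughly as below. Settings, Mappings, and SchemaExport are hypothetical stand-ins for whatever the new API ends up calling them; the stub classes exist only to make the sketch self-contained:

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch of the "same mappings, multiple databases" use case.
// Settings/Mappings/SchemaExport are hypothetical stand-ins, not Hibernate API.
class Settings {
    final String jdbcUrl;
    Settings(String jdbcUrl) { this.jdbcUrl = jdbcUrl; }
}

class Mappings {
    final List<String> resources = new ArrayList<>();
    void addResource(String resource) { resources.add(resource); }
}

class SchemaExport {
    // Returns a description of what would be exported where.
    static String run(Settings settings, Mappings mappings) {
        return "exporting " + mappings.resources + " to " + settings.jdbcUrl;
    }
}

public class MultiDatabaseExport {
    public static void main(String[] args) {
        // Define the mappings once...
        Mappings mappings = new Mappings();
        mappings.addResource("SomeEntity.hbm.xml");

        // ...then run schema export against each database by swapping settings.
        System.out.println(SchemaExport.run(new Settings("jdbc:h2:mem:db1"), mappings));
        System.out.println(SchemaExport.run(new Settings("jdbc:h2:mem:db2"), mappings));
    }
}
```

Whatever shape the new API takes, the key requirement is that the mapping definitions survive a change of settings.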
I see two alternatives here: either a sequential set of steps or a parallel set of steps.
In this approach we'd have one object to define "settings" and another to apply metadata. Users would obtain the latter from the former (forget the current role of these object names for the time being):
    Settings settings = new Settings( connectionProvider ); // overloaded constructors
    settings.setSqlLogger( myLogger );
    ...
    Mappings mappings = settings.buildMappings();
    mappings.addClass( SomeEntity.class );
    mappings.addResource( "SomeOtherEntity.hbm.xml" );
    ...
    mappings.buildSessionFactory();
I also like the idea of Settings being an interface. Really it is just meant to provide access to the components used by Hibernate (ConnectionProvider, TransactionFactory, etc.). The important piece here, though, is the sequence of steps the API enforces.
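As a sketch of that interface idea, Settings might simply expose the configured components. The nested component interfaces below are empty placeholders standing in for Hibernate's actual ConnectionProvider and TransactionFactory contracts; the shape, not the content, is the point:

```java
// Illustrative only: Settings as an interface exposing the components
// Hibernate should use. The nested interfaces are placeholders for the
// real ConnectionProvider/TransactionFactory contracts.
public interface Settings {
    interface ConnectionProvider { }
    interface TransactionFactory { }

    ConnectionProvider getConnectionProvider();
    TransactionFactory getTransactionFactory();
}
```

An integrator could then implement Settings directly and hand in fully constructed component instances, addressing the integration concern noted earlier.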
In this approach we'd again have two separate objects (one for settings, one for metadata), only here we would bring them together to build a SessionFactory. There are a couple of options for how this might look:
    Settings settings = new Settings( ... );
    ...
    Mappings mappings = new Mappings( ... );
    ...
    // one option
    mappings.buildSessionFactory( settings );
    // another
    settings.buildSessionFactory( mappings );
    // still another
    SessionFactoryBuilder.buildSessionFactory( settings, mappings );
The real distinction between the two approaches is that in the first, when applying the metadata, we can be certain of knowing the dialect (and perhaps even having access to the Connection in most cases). This has the distinct advantage of allowing us to leverage this information as we build the metadata to incorporate all kinds of information (imagine being able to automatically quote dialect/connection-reported keywords used as table/column names, applying various data type constraints as reported by the dialect/connection, being able to reference "hql functions" in mappings, etc.).
In the case of the second approach we do not have that luxury (very much like what we have today, in fact). There we'd either have to live with that limitation or build a parallel set of metadata (a "raw" model and a "resolved" model).
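That parallel-metadata idea might look something like the following: a dialect-ignorant "raw" model recording what the user mapped, resolved into a final model once the dialect is known. All names here are hypothetical, and keyword quoting stands in for the broader class of dialect-dependent decisions:

```java
import java.util.Locale;
import java.util.Set;

// Hypothetical sketch of a "raw" vs "resolved" metadata model. The raw model
// records what the user mapped; resolution applies dialect knowledge (here,
// keyword quoting) once the dialect finally becomes available.
class RawTable {
    final String name;
    RawTable(String name) { this.name = name; }
}

class ResolvedTable {
    final String sqlName; // name as it will actually appear in generated SQL
    ResolvedTable(String sqlName) { this.sqlName = sqlName; }
}

class MetadataResolver {
    private final Set<String> dialectKeywords;

    MetadataResolver(Set<String> dialectKeywords) {
        this.dialectKeywords = dialectKeywords;
    }

    ResolvedTable resolve(RawTable raw) {
        boolean isKeyword = dialectKeywords.contains(raw.name.toUpperCase(Locale.ROOT));
        return new ResolvedTable(isKeyword ? "\"" + raw.name + "\"" : raw.name);
    }
}
```

The obvious cost, compared to the sequential approach, is carrying two models and a resolution pass rather than building the final model in one go.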