
    JDBCCacheLoaderTest run

    hmesha

      Hi all,

      This is a discussion about running the JDBCCacheLoaderTest class, either manually or as part of the automated functional test run.

      This test case requires access to a DBMS and a JDBC driver. Given that requirement, we have two options:

      1) Run the test case manually, by configuring the DBMS connection in the etc/cache-jdbc.properties file and placing the appropriate JDBC driver jar in the lib directory

      2) Include an in-memory database engine and its JDBC driver (e.g. Derby), configure the etc/cache-jdbc.properties file accordingly, and let the test run automatically
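
      For illustration, a minimal etc/cache-jdbc.properties for embedded Derby might look like the following. This is only a sketch: the cache.jdbc.* key names are as I recall them from the cache loader configuration, and the database name is made up, so check them against the shipped file.

          # Embedded Derby: loading the driver starts the engine in-JVM
          cache.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
          # ;create=true creates the database on first connection
          cache.jdbc.url=jdbc:derby:jbosscache-test;create=true
          cache.jdbc.user=
          cache.jdbc.password=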

      We have been discussing this over private email; see below:


      Hany Mesha wrote:
      >
      > The Derby jar is 2.0 MB. It can run in embedded mode or in client/
      > server mode. I'm planning to use it in embedded mode. Embedded mode is an
      > environment in which only a single application can access a database
      > at one time, and no network access occurs. When an application starts
      > an instance of Derby within its JVM, the application runs in an
      > embedded environment. Loading the embedded driver starts Derby.
      >
      > Actually, Derby used to be Informix/IBM Cloudscape until IBM donated
      > it to Apache, where it has since evolved into Derby.
      >
      > I'll start a discussion on the design forum for this issue and let's
      > continue there.
      >
      > - Hany
      >
      >
      > Hany M. Mesha
      >
      > Sr. Software Engineer,
      >
      > Novell exteNd Engineering
      >
      > Toronto, Canada
      >
      > hmesha@novell.com <mailto:hmesha@novell.com>
      >
      > Mobile: 416-456-6945
      >
      > Novell, Inc.
      >
      > Software for the Open Enterprise
      >
      > www.novell.com/open <http://www.novell.com/open>
      >
      >
      >
      > >>> Manik Surtani <manik@jboss.org> 01/24/06 10:22 am >>>
      > Hmm, this discussion is heading towards a lib-reference dir that
      > contains reference jars for running unit tests but is not shipped
      > with any of the packages... messy though.
      >
      > And how would Derby compare with Hypersonic?
      >
      > Cheers,
      > --
      > Manik Surtani
      > manik@jboss.org
      >
      > Telephone: +44 7786 702 706
      > MSN: manik@surtani.org
      > Yahoo: maniksurtani
      > AIM: maniksurtani
      > Skype: maniksurtani
      >
      >
      > On 24 Jan 2006, at 15:09, Bela Ban wrote:
      >
      > > How big is the Derby DB JAR ? Does it run in-memory only ?
      > >
      > > Hany Mesha wrote:
      > >>
      > >> Hi Manik,
      > >>
      > >> I contacted Ryan Campbell about this issue, and he suggested I set
      > >> it up that way so the build team can tweak cache-jdbc.properties to
      > >> match the server config in the lab. Also, I brought up this issue
      > >> in the last update meeting we had, for the exact same reason you
      > >> mention below. Ben suggested that I look into making the test case
      > >> run against Derby by default. Although that will add a jar to the
      > >> lib directory in CVS, I think it's the best option, and I'm
      > >> actually going to do it. If you prefer to eliminate the failure
      > >> before cutting the DR, let me know and I'll rename the test case
      > >> for the time being, until I get to switch it over to Derby and
      > >> test it.
      > >>
      > >> Cheers,
      > >>
      > >> -Hany
      > >>
      > >>
      > >> Hany M. Mesha
      > >>
      > >> Sr. Software Engineer,
      > >>
      > >> Novell exteNd Engineering
      > >>
      > >> Toronto, Canada
      > >>
      > >> hmesha@novell.com <mailto:hmesha@novell.com>
      > >>
      > >> Mobile: 416-456-6945
      > >>
      > >> Novell, Inc.
      > >>
      > >> Software for the Open Enterprise
      > >>
      > >> www.novell.com/open <http://www.novell.com/open>
      > >>
      > >>
      > >> >>> Manik Surtani <manik@jboss.org> 01/24/06 9:57 am >>>
      > >> Hi guys
      > >>
      > >> This test currently fails on lab because of incorrect connection
      > >> details in etc/cache-jdbc.properties and due to a missing MySQL
      > >> driver.
      > >>
      > >> How could we effectively set this up so the tests run without
      > >> adding unnecessary library jars to CVS? With previous JDBC tests,
      > >> we renamed the test to JDBCCacheLoaderTestCase which meant that
      > >> the test never ran when using any of the ant targets, but could
      > >> only be run manually. While this approach is effective, it does
      > >> mean that regression testing the JDBCCacheLoader can get
      > >> overlooked. Any suggestions on how we can achieve this?
      > >>
      > >> Cheers,
      > >> --
      > >> Manik Surtani
      > >> manik@jboss.org
      > >>
      > >> Telephone: +44 7786 702 706
      > >> MSN: manik@surtani.org
      > >> Yahoo: maniksurtani
      > >> AIM: maniksurtani
      > >> Skype: maniksurtani
      > >>
      > >>
      > >>
      > >
      > > --
      > > Bela Ban
      > > Lead JGroups / JBossCache
      > > callto://belaban
      > >
      >
      >


        • 1. Re: JDBCCacheLoaderTest run
          hmesha

          Hi all,

          In order to run Derby with JDBCCacheLoaderTest, I need to change the following line in the JDBCCacheLoader.insertNode method to use BLOB instead of LONGVARBINARY. It used BLOB before, but was changed for some reason (I suspect when the MarshalledValue object was introduced to handle de-serialization).

          ps.setNull(2, Types.BLOB);
          // previously: ps.setNull(2, Types.LONGVARBINARY);
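
          For context, here is a minimal sketch of what such an insert path looks like; the SQL, table name, and method shape are illustrative, not the actual JBossCache source.

              import java.io.ByteArrayOutputStream;
              import java.io.IOException;
              import java.io.ObjectOutputStream;
              import java.sql.Connection;
              import java.sql.PreparedStatement;
              import java.sql.SQLException;
              import java.sql.Types;
              import java.util.Map;

              public class InsertNodeSketch {
                  static void insertNode(Connection con, String fqn, Map attributes)
                          throws SQLException, IOException {
                      PreparedStatement ps = con.prepareStatement(
                              "INSERT INTO jbosscache (fqn, node) VALUES (?, ?)");
                      try {
                          ps.setString(1, fqn);
                          if (attributes == null) {
                              // The change under discussion: declare the NULL
                              // parameter as BLOB rather than LONGVARBINARY.
                              ps.setNull(2, Types.BLOB);
                          } else {
                              // Serialize the node's attribute map and bind the
                              // bytes; the driver maps them to its binary type.
                              ByteArrayOutputStream baos = new ByteArrayOutputStream();
                              ObjectOutputStream oos = new ObjectOutputStream(baos);
                              oos.writeObject(attributes);
                              oos.close();
                              ps.setBytes(2, baos.toByteArray());
                          }
                          ps.executeUpdate();
                      } finally {
                          ps.close();
                      }
                  }
              }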


          I have tested this change with other DBMSs (MySQL, Oracle, PostgreSQL). It runs fine on the first two but breaks on PostgreSQL. To be fair, PostgreSQL was never tested with JBossCache until recently, when I ran the tests while working on JBCACHE-320 and posted a workaround on the wiki: set the data type of the jbosscache table's node column to bytea, which suits LONGVARBINARY but not BLOB.
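
          For reference, here is that wiki workaround expressed as a column-type override; the cache.jdbc.node.type key name is assumed from the cache loader's configurable column types (see reply 3 below).

              # Hypothetical override: create the node column as bytea
              cache.jdbc.node.type=bytea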

          So the question is: do we support Derby and break PostgreSQL, or stay with PostgreSQL and run the test case manually?

          Cheers,

          -Hany



          If no one objects to the change, I'll go ahead and commit it.

          Cheers,

          Hany

          • 2. Re: JDBCCacheLoaderTest run
            manik

            I thought the SQL column types were specified in the properties passed into the cache loader. Hardcoding something like LONGVARBINARY may break some DBs (as you found out) but may have indeterminate results in other cases.

            A good example is using a BLOB in MySQL, which at face value works fine but breaks when you have a node larger than 4KB. The trick is that with MySQL the real data type should be LONGBLOB, so although BLOBs will appear to work, they can be misleading. (See JBCACHE-320.)
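
            For instance, the JBCACHE-320 fix expressed as a column-type override (same assumed key name as the PostgreSQL workaround above):

                # MySQL: LONGBLOB holds up to 4GB, unlike plain BLOB
                cache.jdbc.node.type=LONGBLOB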

            • 3. Re: JDBCCacheLoaderTest run
              hmesha

              The database column types used to create the table are configurable on startup, but the java.sql.Types.BLOB constant is not; it is how the JDBCCacheLoader interacts with the JDBC driver to insert, load, and update nodes.

              The only thing this change affects is how the different JDBC drivers map java.sql.Types.BLOB to the real database column type in their implementations. For example, PostgreSQL has no real BLOB data type in the database server, but its JDBC driver provides a large object API that allows java.sql.Types.BLOB values to be stored in the database using the bytea or oid types.

              Cheers,

              -Hany

              • 4. Re: JDBCCacheLoaderTest run
                hmesha

                Here's some additional clarification of the proposed change:

                BLOB is the intended type for the database column, and since we always store a HashMap in the node column no matter what, it makes sense to use the same type in the JDBC call on the prepared statement. The only thing we lose by switching from LONGVARBINARY to BLOB is the ability to store an alphanumeric value in the node column, which we don't do.

                Moreover, open source databases like Derby and MySQL, and major vendor DBMSs like Oracle, all work fine with the change. The problem, IMHO, is really a PostgreSQL one: it's the only database we have a problem with, because it implements large objects through its own API. It all comes down to each vendor's implementation of the JDBC spec in its driver.

                Cheers,

                -Hany

                • 5. Re: JDBCCacheLoaderTest run
                  manik

                  +1 for going with BLOBs.

                  This seems to be a problem with the PostgreSQL driver not following the JDBC spec, so it shouldn't be our problem. Hany, you should document this on the wiki page as well, so people have a place to look if they run into it.

                  • 6. Re: JDBCCacheLoaderTest run
                    hmesha

                    Hi all,

                    Jerry has stumbled on another driver that doesn't support the BLOB data type: the MS SQL JDBC driver. See JIRA issue JBCACHE-400.

                    The following RDBMS JDBC drivers have a problem with the BLOB data type:

                    1. PostgreSQL
                    2. MS SQL
                    3. Sybase

                    From what I've read, the SQL99 spec calls for CLOB and BLOB data types, but not all DB vendors follow that spec when implementing their JDBC drivers.

                    If you know of others, please let me know.

                    The question is what should we do about this issue?

                    One solution I have is to detect which driver is being loaded and assign the right data type for the node column during JDBCCacheLoader startup.

                    Cheers,

                    Hany

                    • 7. Re: JDBCCacheLoaderTest run

                      I've tested my SQL Server 2000 installation with the old Microsoft driver as well as the new SQL Server 2005 driver, which is backwards compatible with SQL Server 2000. The problem I reported in JBCACHE-400 only occurs with the older driver.

                      If you want to use the database driver name to distinguish them, here are the metadata values emitted by the two Microsoft drivers:

                      Database name = "Microsoft SQL Server" (both drivers, with SQL Server 2000)

                      Driver name = "SQLServer" for the older Microsoft JDBC driver

                      Driver name = "Microsoft SQL Server 2005 JDBC Driver" for the 2005 driver.

                      • 8. Re: JDBCCacheLoaderTest run
                        hmesha

                        All done and checked in against JBCACHE-400. I added logic to handle MS SQL Server drivers older than 2005, as well as PostgreSQL.

                        Thanks Jerry for catching this one.

                        Hany