0 Replies Latest reply on Nov 27, 2012 9:12 AM by vineet.reynolds

    Proposal - Interactive datasource debugging

    vineet.reynolds

      The problem

       

      In order to debug the state of your database during test execution, you'll often need to execute a chain of expressions in your remote debugging environment. For instance, you might have a reference to a java.sql.Connection instance in the current stack frame, and you may be forced to evaluate statements like connection.createStatement().execute(sql). This is painful (to put it mildly) if you need to inspect the state of several rows across different tables.

       

      This is complicated by transaction isolation levels. Your database engine may (and usually will) hide intermediate changes made in the SUT's transaction from any other database session that you may have started to debug the state of the database. For instance, when you test against a database using the READ_COMMITTED or a higher isolation level, you will be unable to read modified data unless the modifications are committed by the SUT.
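      In a debugging session you can at least confirm which isolation level the SUT's connection runs under by evaluating connection.getTransactionIsolation() and comparing the result against the java.sql.Connection constants. A tiny reference snippet (illustration only; it needs no database):

```java
import java.sql.Connection;

// The JDBC isolation levels, as declared on java.sql.Connection. On a
// suspended VM you would evaluate connection.getTransactionIsolation()
// and compare the returned value against these constants.
public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println("TRANSACTION_READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("TRANSACTION_READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("TRANSACTION_REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("TRANSACTION_SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);
    }
}
```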

       

      It is quite important to debug the state of the database within the current transaction context. I've written up a very simple Arquillian extension that appends a couple of libraries to the deployment, so that this is possible at a very rudimentary level. The libraries in question are DataStorm and SWT (its dependency). An example usage is in this branch of Bartosz's BeerAdvisor demo; you'll need to run the BeerRepositoryTest if you want a quick overview.

       

      The proposal

       

      This could be improved in several ways:

       

      1. An embedded execution environment. DataStorm is launched in the container, and is not managed as a plugin within the IDE. It would be better if we could execute SQL queries from our IDE and have them transported to the current stack frame on a suspended VM, where they would be interpreted and executed by a "helper". The "helper" would be responsible for executing the SQL query and serializing the resultset for consumption at the client/IDE. I believe we wouldn't need DataStorm or SWT if we were to achieve this.

      2. Auto-wiring. Currently, you'll need to provide the DataStorm API with a Connection or a DataSource reference. Irrespective of (1), we should not require a developer to provide us with a Connection or DataSource reference. Such a reference may be impossible to obtain if the VM is halted in a method where it is not in scope (for example, in a class from rt.jar), and initiating a debugging session at that point might still be necessary.

      3. Handle transaction timeouts. Currently, the test fails if the JTA transaction reaper comes around and marks the transaction for rollback when it times out. Most application servers set the default JTA transaction timeout to 30 seconds. I'm not certain that the database engine would convey the correct resultset once the transaction times out, and that could surprise developers. Maybe this is acceptable when the VM is first suspended, since the reaper thread is suspended along with it; in that case a transaction rollback or commit by ATE/APE would fail. But should a developer place two breakpoints, the reaper may execute before the second breakpoint is hit and thus break the debugging workflow.
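      To make point (1) a bit more concrete, here's a rough sketch of what the serializing part of such a "helper" might look like. Everything here is hypothetical (the class and method names are made up), and the fake ResultSet in the second half exists only so the sketch runs without a database:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical in-container "helper": runs against the connection already on
// the suspended stack frame and flattens the ResultSet into plain collections,
// which are cheap to ship back to the IDE over the debugging channel.
public class ResultSetSerializer {

    public static List<Map<String, Object>> serialize(ResultSet rs) throws SQLException {
        ResultSetMetaData meta = rs.getMetaData();
        int columns = meta.getColumnCount();
        List<Map<String, Object>> rows = new ArrayList<>();
        while (rs.next()) {
            Map<String, Object> row = new LinkedHashMap<>();
            for (int i = 1; i <= columns; i++) {
                row.put(meta.getColumnLabel(i), rs.getObject(i));
            }
            rows.add(row);
        }
        return rows;
    }

    // Stand-in ResultSet over in-memory data, so the sketch runs without a database.
    static ResultSet fakeResultSet(String[] labels, Object[][] data) {
        InvocationHandler metaHandler = (proxy, method, args) -> {
            switch (method.getName()) {
                case "getColumnCount": return labels.length;
                case "getColumnLabel": return labels[((Integer) args[0]) - 1];
                default: throw new UnsupportedOperationException(method.getName());
            }
        };
        ResultSetMetaData meta = (ResultSetMetaData) Proxy.newProxyInstance(
                ResultSetSerializer.class.getClassLoader(),
                new Class<?>[] { ResultSetMetaData.class }, metaHandler);
        int[] cursor = { -1 };
        InvocationHandler rsHandler = (proxy, method, args) -> {
            switch (method.getName()) {
                case "next": return ++cursor[0] < data.length;
                case "getMetaData": return meta;
                case "getObject": return data[cursor[0]][((Integer) args[0]) - 1];
                default: throw new UnsupportedOperationException(method.getName());
            }
        };
        return (ResultSet) Proxy.newProxyInstance(
                ResultSetSerializer.class.getClassLoader(),
                new Class<?>[] { ResultSet.class }, rsHandler);
    }

    public static void main(String[] args) throws SQLException {
        ResultSet rs = fakeResultSet(
                new String[] { "ID", "NAME" },
                new Object[][] { { 1, "stout" }, { 2, "lager" } });
        System.out.println(serialize(rs));
    }
}
```

      In the real thing, the missing piece is of course the transport: the IDE would evaluate serialize(...) on the suspended stack frame (e.g. via JDI's method invocation support) and render the returned collections.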
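      And for point (2), one possible starting point for auto-wiring would be a JNDI lookup performed by the helper inside the container. The datasource name below is a made-up example; a real helper would enumerate the server's bound datasources rather than hard-code one, and outside a container the lookup simply fails with a NamingException:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Sketch of auto-wiring: instead of requiring a Connection/DataSource from the
// developer, the in-container helper resolves one itself through JNDI.
// "java:jboss/datasources/ExampleDS" is a made-up example name.
public class DataSourceLocator {

    public static DataSource locate(String jndiName) {
        try {
            return (DataSource) new InitialContext().lookup(jndiName);
        } catch (NamingException e) {
            // Outside a container there is no JNDI provider, so the lookup fails.
            System.out.println("no naming context available: " + e.getClass().getSimpleName());
            return null;
        }
    }

    public static void main(String[] args) {
        DataSource ds = locate("java:jboss/datasources/ExampleDS");
        System.out.println(ds == null ? "not running in a container" : "datasource found");
    }
}
```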

       

      Note that point (1) could be made extensible to allow for debugging of multiple datasources (through support for other query languages or APIs), not just relational databases.