0 Replies Latest reply on Jun 22, 2003 7:55 PM by AndrewGoedhart

    cmp read ahead caching does not perform as expected with large result sets

    AndrewGoedhart Newbie

      I have been doing some testing on the read-ahead caching implementation in JBoss, and there seem to be performance problems with large result sets.

      I am using JBoss 3.2.1 with Tomcat. The VM was run with the -server option, as this gave the best average times for the tests. The memory size was set to 256M, and the garbage collector never reported going over 128M. The database is Firebird 1.5 with the pure-Java JDBC driver, version 1.0. The database is local to the machine running the VM. The machine has 512M of DDR and the database has the default cache settings. No swapping ever occurs during the tests.

      I created 10 000 simple beans (each bean has 4 CMP fields: 3 longs, 1 timestamp).
      There is a findAll that returns all 10 000 beans.

      I call findAll inside a user-demarcated transaction and then iterate over the resulting collection, reading a single field from each bean. This is done 20 times and the results averaged. The transaction is committed after each iteration and a new one started. Depending on the settings I get the following.

      Commit   Read     Page     Type      Ave.   Min.   Max.
      option   ahead    size               time   time   time
      B         1000     1000    on-load    8.4    7.1   25.3
      A         1000     1000    on-load   11.1    6.4   22.2
      B        10000     1000    on-load    8.2    7.0   10.6
      A        10000     1000    on-load   13.1    6.4   23.3
      B        10000    10000    on-load    7.8    6.0   22.5
      A        10000    10000    on-find    7.6    6.5   25.1
      B        10000    10000    on-find    7.3    6.5    6.5
      A            1        1    on-load    1.8    1.4   11.2
      A            1     1000    on-load    1.7    1.3   10.4
      A            1        1    on-load    1.7    1.3   10.4
      A            1    10000    on-load    1.7    1.3   10.4
      A            1        1    on-find    1.7    1.3   11.2
      B            1    10000    on-find    1.4    1.3   10.1
      B            1        1    on-find    1.5    1.3   10.1

      The average shown is computed with the single worst and best times removed.
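      To be precise about the averaging: it is a trimmed mean — the single best and worst iterations are dropped and the rest averaged. A quick sketch in plain Java (nothing JBoss-specific; class and method names are just for illustration):

```java
import java.util.Arrays;

public class TrimmedMean {
    // Average with the single best and worst times removed,
    // as used for the "Ave. time" columns in the tables.
    static double trimmedMean(double[] times) {
        double[] sorted = times.clone();
        Arrays.sort(sorted);
        double sum = 0.0;
        for (int i = 1; i < sorted.length - 1; i++) { // skip min and max
            sum += sorted[i];
        }
        return sum / (sorted.length - 2);
    }

    public static void main(String[] args) {
        double[] times = {7.1, 8.0, 8.4, 9.0, 25.3};
        // drops 7.1 and 25.3, averages the middle three
        System.out.println(trimmedMean(times));
    }
}
```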
      A few points to note:
      1.) Garbage collection is not a factor: I ran the VM with -verbose:gc, and the maximum GC time in any one iteration was under 1 second (under 0.2 seconds for the runs with read ahead = 1).
      2.) When running the client VM, things are even more pronounced. I ran each of these tests twice just to make sure that I was getting the right numbers:

      Commit   Read     Page     Type      Ave.   Min.   Max.
      option   ahead    size               time   time   time
      A        10000    10000    on-find   11.0    3.4   25.1
      B        10000    10000    on-find   11.4    3.4   23.2
      B            1        1    on-find    1.5    1.5    2.8
      B            1    10000    on-find    1.5    1.5    3.4
      A            1        1    on-find    1.2    1.1    3.5
      A            1    10000    on-find    1.2    1.2    3.5
      A            1    10000    on-load    1.3    1.2    3.6

      A few questions:
      1.) Has anyone had a similar experience and can shed some light on what is happening here?
      2.) What exactly happens with the on-load setting? The system does not seem to be doing what it is supposed to. In my understanding, on-load means the bean is populated when it is accessed for the first time. But with commit option A, surely this should come from the bean cache? On-load performs worse with commit option A than with B.
      3.) How does the on-find setting work? My understanding from the CMP docs was that the cache list is loaded during the finder, and the page size is therefore irrelevant. Is this correct?

      4.) Why, when the cache list size is set to 1, the page size to 10000, and the commit option to B, does the system outperform everything else on the server VM, and also outperform commit option A with the list cache enabled on the client VM? I could not believe this, so I re-ran this configuration a few times just to make sure. It has a high initial time at the start of the run and then drops rapidly to around the average after a few iterations.

      5.) What exactly happens when I set the cache list size to 1?
      Any ideas as to what is happening here?
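
      For reference, the settings I am varying live in jbosscmp-jdbc.xml and jboss.xml. A trimmed sketch of the relevant fragments (element names as I understand them from the JBoss 3.2 CMP docs; the values are just illustrative):

```xml
<!-- jbosscmp-jdbc.xml: read-ahead strategy, page size, and cache list size -->
<jbosscmp-jdbc>
  <defaults>
    <read-ahead>
      <strategy>on-load</strategy>           <!-- on-load / on-find / none -->
      <page-size>1000</page-size>
      <eager-load-group>*</eager-load-group>
    </read-ahead>
    <list-cache-max>1000</list-cache-max>    <!-- the "cache list" size -->
  </defaults>
</jbosscmp-jdbc>

<!-- jboss.xml: the commit option is set on the container configuration -->
<container-configurations>
  <container-configuration>
    <container-name>Standard CMP 2.x EntityBean</container-name>
    <commit-option>A</commit-option>         <!-- A or B in these tests -->
  </container-configuration>
</container-configurations>
```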

      I thought I knew what was going on with the CMP read-ahead caching; now I am not so sure. Does anyone have pointers as to where in the CMP code I should start looking? I had a look at the ReadAheadCache class in the ..plugins.cmp.jdbc package, but everything looks okay.
      ---> Suggestions?