Version 2

    Tuning SPECjAppServer2002


    Tuning Methodology


    Observation and Evaluation Phase


    Most of the work involved in tuning lies in recording data and evaluating which modification to make next. The baseline workload and reading give a well-defined starting point for a tuning exercise, and the baseline reading can be used to gauge progress. However, the baseline reading does not have to be used throughout. As better results are obtained through parameter modification, those results should become the new yardstick. So the baseline reading should be used initially, until positive progress is measured; thereafter the best result should be used to gauge progress.


    With respect to the SPECjAppServer2002 benchmark, the Benchmark Driver has a parameter called the Injection Rate (IR). The IR indicates how much load is placed on the benchmark. The IR starts at 1 (the lowest setting) and can grow from there. Typical IR settings, for currently published SPECjAppServer2002 results, are in the 100-2000 range.


    When the SPECjAppServer2002 benchmark completes, it reports a number of operations per second, expressed as the metric TOPS (Total Operations Per Second). There is a direct link between the IR and the TOPS reported: expect roughly 1.7 TOPS per unit of IR. Therefore, for an IR of 100, a result of approximately 170 TOPS should be obtained. It is important to understand this relationship and apply it to the benchmark results. For example, running SPECjAppServer2002 with an initial IR of 50 and obtaining a result of 85 TOPS would mean that the benchmark was performing well (50 x 1.7 = 85) - the IR could then be increased until no further performance improvement is observed. At that point, various parameters or environmental settings could be modified in an attempt to increase performance further. In contrast, running SPECjAppServer2002 with an initial IR of 50 and obtaining a result of only 40 TOPS would mean that the benchmark was performing poorly and some parameter or environmental corrections need to be made.
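    The IR-to-TOPS rule of thumb above can be captured in a small helper. The sketch below is a hypothetical utility (not part of the SPECjAppServer2002 kit); the 1.7 ratio and the 5% tolerance used here are assumptions drawn from the text, and the class and method names are illustrative only.

```java
// Hypothetical sanity-check for SPECjAppServer2002 results, applying the
// ~1.7 TOPS-per-IR rule of thumb described above. Not part of the benchmark kit.
public class TopsCheck {
    static final double TOPS_PER_IR = 1.7; // expected ratio from the text

    // Expected TOPS for a given injection rate.
    static double expectedTops(int injectionRate) {
        return injectionRate * TOPS_PER_IR;
    }

    // True when the measured result is within `tolerance` (e.g. 0.05 = 5%)
    // of the expected value, i.e. the run looks healthy.
    static boolean isHealthy(int injectionRate, double measuredTops, double tolerance) {
        return measuredTops >= expectedTops(injectionRate) * (1.0 - tolerance);
    }

    public static void main(String[] args) {
        // The two scenarios from the text: IR 50 with 85 TOPS (good)
        // and IR 50 with only 40 TOPS (needs correction).
        System.out.println(isHealthy(50, 85.0, 0.05)); // healthy run
        System.out.println(isHealthy(50, 40.0, 0.05)); // unhealthy run
    }
}
```

    A check like this could be run after each benchmark pass to decide whether to raise the IR or to start adjusting parameters.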


    If any monitoring tools are used (e.g. Task Manager or PERFMON), they should be used in every run so that the benchmark environment remains consistent across runs.