'Ceteris Paribus' is a Latin phrase most of us learned while studying Economics, normally defined as "all else being equal". It is commonly used with respect to the law of supply and demand: 'If the price of BEA licensing were to decrease - ceteris paribus - more people would buy BEA licensing.' Of course, this statement assumes "all else being equal" and does not take into account substitute goods (JBoss, anyone?), macro-economic variables, ridiculous comments on TSS by BEA executives, etc. This is how economic theories are normally discussed, as a myriad of variables can affect the demand for or supply of any good or service.


So why the quick lesson in fundamental economics? Is it a primer for a blog on Econometrics? Thankfully, no (I'd like to forget those days). It is meant to show how a cause-and-effect relationship *can be* isolated from external influences, something that seems to be lacking in the world of benchmarking.


We should all be familiar with benchmarks by now. They're those cute little chart-and-graph reports that corporations release periodically to show their product towering over their competitors'. Normally, they're about as objective as the tobacco industry sponsoring studies showing how cigarettes don't kill. Clearly, whoever runs the test has the ability to tweak/tune/hack things in such a way as to achieve the desired results. From where I sit, they are nothing more than marketing fluff/FUD/poop pieces.


Every once in a while, a benchmark is released by a third party that would seemingly have nothing at stake in any player in the study 'winning'. This week, we have a study by eWeek. JBoss Portal did very well in this 'study'...


On Average Transactions Per Second:


On Average Document Download Time:


The problem with this study, however, is that the idea of Ceteris Paribus is not observed. From their platform matrix, I see a mish-mash of stacks: different OSes, different DBs, different JVMs. And then the question: 'Which portal is the most performant?' If the benchmark implementors had a clue about some basic scientific principles, we would see identical stacks compared - one for the Windows side and one for the Linux side.


Ceteris Paribus breaks down in this study, as...

  1. Portals tend to sit at the top of the stack and are influenced by everything that sits under them. (This is why we leverage JBoss JEMS components, so we don't have any 'frankenstein parts' as our underpinnings.) So all underlying components must be identical: OS, DB, network usage, etc.
  2. Bundled portlets are never identical. An MS Exchange portlet will take much longer to execute via WS than a simple cached HelloWorldJSP portlet.
  3. Who was the genius who thought of making every portal communicate over the network (slow) to a MySQL DB - except for just one of them, which accesses an in-memory DB?
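The point of the list above can be sketched in a few lines of Java: hold everything fixed (same JVM, same warmup, same workload) and vary only the one thing under test. The portlet bodies below are empty placeholders, not any real portal's render path - this is a minimal sketch of the controlled-comparison principle, not a production benchmark harness.

```java
// Minimal sketch of a controlled comparison: both candidates run on the
// same JVM with identical warmup and workload, so only the code under
// test varies. Placeholder Runnables stand in for real portlet work.
public class FairBench {

    // Average wall-clock time per call in milliseconds, after an
    // identical warmup phase for each candidate.
    static double avgMillis(Runnable candidate, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) candidate.run();      // identical warmup
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) candidate.run();  // identical workload
        return (System.nanoTime() - start) / 1_000_000.0 / iterations;
    }

    public static void main(String[] args) {
        Runnable portletA = () -> { /* portlet A's render path would go here */ };
        Runnable portletB = () -> { /* portlet B's render path would go here */ };
        System.out.printf("A: %.4f ms/call, B: %.4f ms/call%n",
                avgMillis(portletA, 100, 1000),
                avgMillis(portletB, 100, 1000));
    }
}
```

A real study would need far more rigor than this (JIT compilation, GC pauses, network jitter, and so on), but the principle stands: if the two measurements differ in anything other than the candidate itself, the comparison tells you nothing.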


I'm not complaining about the results - although, Ceteris Paribus, JBoss Portal would be the clear winner. ;-) However, if benchmarks are a necessary evil we have to deal with in our industry so people can sell ads and steer prospects in their direction, can we agree to observe some basic (very basic) scientific principles?


Roy Russo