manik.surtani@jboss.com
Sure thing. Where do you want me to post my findings? This topic, a new topic, or the 1.2.4 final topic?
I would actually recommend another topic, specific to Optimistic Locking in 1.2.4 FINAL.
Thanks!
Manik
"bstansberry@jboss.com" wrote:
Motormind,
You've been so helpful in the beta, we wanted to find a way to get Map.values() working for you in 1.2.4. So we pushed that one forward from 1.3, and it's fixed in CVS. See
http://jira.jboss.com/jira/browse/JBCACHE-342
Thanks much for all your helpful posts.
Hi All,
I have been testing JDBCCacheLoader with various DBMSs, specifically MySQL (3.23.58, 4.1.1.0a-Max, 5.0.16), Oracle (9i Release 2, 10g Release 2), and PostgreSQL 8.1. I have found that you don't need to change any code to make it work with MySQL, or with any other DBMS for that matter.
The reason you're getting this NPE is the size of the object you're trying to store in the cache. If a node's serialized size is greater than 65KB, this exception is thrown: MySQL's plain BLOB type holds at most 65,535 bytes, so a column storing larger data must be defined as MEDIUMBLOB or LONGBLOB. If you need a bigger store for your larger objects, add the following line to your JDBCCacheLoader configuration: cache.node.type=LONGBLOB.
See etc/jdbcCacheLoader-service.xml for an example of that configuration.
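For reference, a minimal sketch of what such a configuration might look like. Only cache.node.type=LONGBLOB is from the post above; the attribute names, other property names, and connection values are assumptions based on typical JBossCache 1.x cache loader setups, so verify them against the etc/jdbcCacheLoader-service.xml shipped with your release:

```xml
<!-- Fragment of a JBossCache service descriptor (sketch only;
     check etc/jdbcCacheLoader-service.xml in your distribution
     for the exact attribute and property names). -->
<attribute name="CacheLoaderClass">org.jboss.cache.loader.JDBCCacheLoader</attribute>
<attribute name="CacheLoaderConfig">
    cache.jdbc.driver=com.mysql.jdbc.Driver
    cache.jdbc.url=jdbc:mysql://localhost:3306/jbosscache
    cache.jdbc.user=user
    cache.jdbc.password=password
    # Use LONGBLOB so nodes larger than 64KB can be stored;
    # MySQL's plain BLOB is limited to 65,535 bytes.
    cache.node.type=LONGBLOB
</attribute>
```

Note that the property only affects tables created by the cache loader; if the table already exists with a BLOB column, you'll need to alter the column type yourself.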
Also, I'd recommend reviewing the data types chapter of the MySQL documentation at http://dev.mysql.com/doc/refman/5.0/en/data-types.html to see the storage requirements for the different MySQL data types.
I'm also in the process of creating a wiki page documenting the test conditions for all the DBMSs mentioned above, to cover all the caveats of using JDBCCacheLoader.
Cheers,
H. Mesha
Hi
For those who reported performance issues with optimistic locking in JBossCache 1.2.4, please have a look at the current codebase in CVS. It has been heavily optimised, and performance is greatly improved, particularly when using cache loaders.
Cheers,
Manik