Oracle iFS lets you store data in the database (so presumably it is transactional) but provides a view of that data that looks like an ordinary file residing on an ordinary volume. I'm not sure it would provide the performance you are looking for, but you could try it out to see how well it works.
I believe the most appropriate solution for this is using an MBean to access the filesystem. This would create a clean separation between the dynamic content management and static web page server.
However, I really can't think of a nice way to handle multi-process access to the filesystem (writing from the MBean while a web server such as Apache reads).
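A minimal sketch of that MBean idea, using only the JDK's JMX and NIO classes; the interface, class, and object name are all hypothetical, and a real deployment would register the bean with the container's MBean server and point it at the web server's actual document root:

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Hypothetical management interface; the XxxMBean naming is required
// by the JMX standard-MBean convention.
interface ContentStoreMBean {
    void publish(String relativePath, String content) throws IOException;
    String read(String relativePath) throws IOException;
}

public class ContentStore implements ContentStoreMBean {
    private final Path root; // the web server's document root

    public ContentStore(Path root) { this.root = root; }

    // Write a page under the document root, creating directories as needed.
    public void publish(String relativePath, String content) throws IOException {
        Path target = root.resolve(relativePath);
        Files.createDirectories(target.getParent());
        Files.write(target, content.getBytes(StandardCharsets.UTF_8));
    }

    public String read(String relativePath) throws IOException {
        return new String(Files.readAllBytes(root.resolve(relativePath)),
                StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        Path docRoot = Files.createTempDirectory("docroot"); // stand-in for the real docroot
        ContentStore store = new ContentStore(docRoot);
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(store, new ObjectName("example:type=ContentStore"));
        store.publish("news/index.html", "<h1>hello</h1>");
        System.out.println(store.read("news/index.html"));
    }
}
```

This only covers the writing side; the multi-process reading problem above remains.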
I would seriously consider using a DB instead. If you choose a DB based on its ability to retrieve large blobs quickly, it might work out. If I read you correctly, you don't really need things like referential integrity.
And do you really need transactions? If atomic updates are all you need, you can skip the transaction overhead.
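For what it's worth, the usual transaction-free way to get atomic updates on a plain filesystem is write-then-rename: write the new version to a temporary file in the same directory, then rename it over the live file. On POSIX filesystems the rename is atomic, so a reader (Apache, say) sees either the old page or the new one, never a torn write. A sketch (class and method names are mine):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class AtomicWrite {
    // Write the new content to a temp file in the same directory, then
    // rename it over the live file. On POSIX filesystems the rename is
    // atomic, so readers never see a half-written page.
    public static void replace(Path target, String content) throws IOException {
        Path tmp = Files.createTempFile(target.getParent(), "page", ".tmp");
        Files.write(tmp, content.getBytes(StandardCharsets.UTF_8));
        Files.move(tmp, target,
                StandardCopyOption.ATOMIC_MOVE,
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("site");
        Path page = dir.resolve("index.html");
        replace(page, "<p>v1</p>");
        replace(page, "<p>v2</p>"); // readers see v1 or v2, nothing in between
        System.out.println(new String(Files.readAllBytes(page), StandardCharsets.UTF_8));
    }
}
```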
Meanwhile I have two interesting leads.
One is an article from ONJava.com describing an approach to transactional filesystem access. If I end up implementing an MBean, a more advanced implementation could build on this. It can be found at http://www.onjava.com/pub/a/onjava/2001/11/07/atomic.html .
Another idea is to talk to the web server via WebDAV, implementing an MBean that manipulates files and handles transactions on a remote web server. However, the WebDAV implementations I know use regular SQL underneath, so there is no real advantage in that; I do suppose other implementations might actually use a filesystem (such as with RCS or CVS). If anyone has ideas on this, please post them!
I used JNDI with a file system implementation, the File System Service Provider. It provides a cheap and easy solution, but I don't know if this is really scalable.
How scalable does this need to be? Do you need transactions? Are JBoss and Jetty (or Tomcat) running together on the same machine?
One simple solution we used at my last job was to create an UploadJspHelper.java class that a JSP page could use. Based on our config file, it knew where to write the uploaded file. The problem, of course, is that if you have more than one server, you either need a common filesystem via NFS or you have to replicate the files to each machine.
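A rough reconstruction of that helper, assuming nothing beyond the JDK; the class name comes from the post above, but the property name and API are my guesses, not the original code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.Properties;

public class UploadJspHelper {
    private final Path uploadDir;

    // The target directory comes from a config file; "upload.dir" is an
    // illustrative property name, not the original one.
    public UploadJspHelper(Properties config) throws IOException {
        this.uploadDir = Paths.get(config.getProperty("upload.dir"));
        Files.createDirectories(uploadDir);
    }

    // Stream an uploaded file into the configured directory.
    public Path save(String fileName, InputStream in) throws IOException {
        Path target = uploadDir.resolve(fileName);
        Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        return target;
    }

    public static void main(String[] args) throws IOException {
        Properties config = new Properties();
        config.setProperty("upload.dir",
                Files.createTempDirectory("uploads").toString());
        UploadJspHelper helper = new UploadJspHelper(config);
        Path saved = helper.save("logo.png",
                new java.io.ByteArrayInputStream(new byte[] {1, 2, 3}));
        System.out.println("wrote " + saved);
    }
}
```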
Sounds like oscache is the tool that you need.
It allows you to have whatever dynamic content you need, but caches it on the filesystem while it isn't being updated.
There are JSP cache tags and servlet filters for caching other content (e.g. dynamically produced images).
Find it here:
One way to use it is to draw all your content from the database but cache it using oscache. When the author makes an update, flush the key for that cache entry; the next time the content is requested, it will be retrieved from the database and re-cached.
Advantages of this are that you can access the same cache from many servers (assuming they are all networked to the fileserver), allowing load balancing.
Disadvantages are that you would have to use a jsp / servlet container to serve your content.
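That flush-on-update cycle can be sketched with a plain in-memory map standing in for oscache (whose real API also persists entries to disk); all names here are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class PageCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader; // stands in for the database query

    public PageCache(Function<String, String> loader) { this.loader = loader; }

    // Read-through: serve from the cache, loading from the database on a miss.
    public String get(String key) {
        return cache.computeIfAbsent(key, loader);
    }

    // Call this when an author saves an update; the next get() reloads the key.
    public void flush(String key) {
        cache.remove(key);
    }

    public static void main(String[] args) {
        PageCache cache = new PageCache(key -> "<html>page for " + key + "</html>");
        System.out.println(cache.get("home")); // loaded from the "database"
        cache.flush("home");                   // author updated the page
        System.out.println(cache.get("home")); // reloaded and re-cached
    }
}
```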
Scott Farquhar :: firstname.lastname@example.org
Atlassian :: http://www.atlassian.com
Supporting YOUR J2EE World
You can use entity beans and change the persistence layer beneath them. The persistence engine could be file based, and you would still have your transactions taken care of by the container.
Why are you going to use EJBs? In your concrete case I would find EJBs hard to recommend. Use them just to access the DB and that's all.
Authors are not going to submit things via more than one channel. Keep your actions atomic and you don't need transactions. What more do you need?
You could use a text JDBC driver. It is J2EE compliant (it's a DataSource). It would be the driver's responsibility to write to the local filesystem, into a CSV file for example.
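To make the "text JDBC driver" idea concrete: behind the DataSource facade, such a driver's write path boils down to appending a quoted row to a CSV file, roughly like this (a sketch of the underlying file handling, not any particular driver's code):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class CsvAppender {
    // Append one record to the csv file, quoting every field and doubling
    // embedded quotes per the usual csv rules. A real driver would layer
    // SQL parsing and ResultSet support on top of this.
    public static void appendRow(Path csv, String... fields) throws IOException {
        StringBuilder row = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) row.append(',');
            row.append('"').append(fields[i].replace("\"", "\"\"")).append('"');
        }
        row.append('\n');
        Files.write(csv, row.toString().getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        Path csv = Files.createTempDirectory("db").resolve("content.csv");
        appendRow(csv, "42", "hello, world");
        System.out.print(new String(Files.readAllBytes(csv), StandardCharsets.UTF_8));
    }
}
```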