This is an ancient debate, one that predates the name 'cloud computing'.
Seam supports clustering really well out of the box, but it won't force you to use it appropriately.
For instance, if you really want to force it into something stateless, you can: run RESTEasy on AMIs deployed dynamically based on usage volume.
EC2 load balancers have very detailed APIs for this purpose. Instances can be booted or suspended on demand based on load metrics; it just isn't baked into the UX of the Management Console.
This would potentially solve the 'idle' instance issue mentioned in the article.
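The boot/suspend-on-load-metrics idea boils down to a small piece of policy logic. A minimal sketch of that decision, in plain Java (the thresholds, the `Action` names, and the class itself are invented for illustration; they are not part of any AWS API — in practice the CPU figure would come from CloudWatch and the actions would map to EC2 start/stop calls):

```java
// Hypothetical autoscaling policy: decide whether to boot or suspend
// instances from an average CPU-utilization metric.
// Threshold values are illustrative, not AWS defaults.
public class ScalingPolicy {

    public enum Action { SCALE_UP, SCALE_DOWN, HOLD }

    private static final double HIGH_CPU = 75.0; // boot another instance above this
    private static final double LOW_CPU  = 20.0; // suspend an idle instance below this
    private static final int    MIN_INSTANCES = 1;

    /** Decide an action from average CPU % across the running instances. */
    public static Action decide(double avgCpuPercent, int runningInstances) {
        if (avgCpuPercent > HIGH_CPU) {
            return Action.SCALE_UP;
        }
        if (avgCpuPercent < LOW_CPU && runningInstances > MIN_INSTANCES) {
            return Action.SCALE_DOWN; // reclaims the 'idle' instances
        }
        return Action.HOLD;
    }

    public static void main(String[] args) {
        System.out.println(decide(85.0, 3)); // busy cluster -> SCALE_UP
        System.out.println(decide(10.0, 3)); // idle capacity -> SCALE_DOWN
        System.out.println(decide(10.0, 1)); // never below the floor -> HOLD
    }
}
```

The `MIN_INSTANCES` floor is what keeps the stack from scaling itself to zero during quiet hours.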
Just because a server has state doesn't mean it is terribly expensive in the cloud, so claiming that idle instances are the end of the world isn't really true.
The article mentions 'pay-per-use' billing models, but the charges are actually driven by several finer-grained metrics.
Low network in/out, low CPU usage (an idle server), and sometimes low memory usage all throttle the bill.
I have had EC2 instances running JBoss 5, Seam 2.2, and PostgreSQL cost about 30 bucks/month when traffic volumes are low. The same applies to 10 servers in the prod stack behind a load balancer (obviously multiplying the usage cost with load).
EC2 uses EBS volumes, which are redundant and can be reused across instances, and which bill by I/O. Low DB interaction, affordable cost.
It isn't always just green/red light for billing.
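As a rough illustration of how instance-hours and per-I/O EBS charges combine into a bill like the one described above (all rates here are made-up placeholders, not actual AWS prices):

```java
// Toy monthly-cost estimate combining instance-hours with per-request EBS I/O.
// Both rates are invented placeholders, not real AWS pricing.
public class BillingSketch {

    static final double INSTANCE_HOUR_RATE = 0.03;      // $/instance-hour (placeholder)
    static final double EBS_IO_RATE = 0.10 / 1_000_000; // $ per I/O request (placeholder)

    /** Estimate the monthly bill for one server. */
    public static double monthlyCost(double instanceHours, long ebsIoRequests) {
        return instanceHours * INSTANCE_HOUR_RATE + ebsIoRequests * EBS_IO_RATE;
    }

    public static void main(String[] args) {
        // A low-traffic month: ~720 hours up, modest DB I/O thanks to caching.
        double quiet = monthlyCost(720, 5_000_000);
        // Ten servers behind a load balancer multiply roughly with load.
        double prod = 10 * monthlyCost(720, 50_000_000);
        System.out.printf("quiet: $%.2f, prod stack: $%.2f%n", quiet, prod);
    }
}
```

The point of the arithmetic is only that the I/O term shrinks with low DB interaction, so an idle stateful box is cheap rather than free.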
On that same issue, in relation to the server-quantity argument, I personally feel that application state is easier to cluster than relying on DB clustering.
DB servers are almost always the bottleneck at large scale.
If you go towards a cloud-based RDBMS, you will eat it on performance.
In fact, my experience with S3- or Azure-backed REST implementations has been negative, because a stateless server interacting with data 'across the boundary' typically creates serious client-UX challenges unless you cache the data somewhere (server, client plugin/JS, etc.).
You could implement the client in Silverlight or Flash, but then you have to rely on inconsistent client interaction, depending on the performance available to each client. Or you could introduce a 'grid' between the data and the cloud, which is essentially a very 'controlled' application server that has state (and will have AMIs too, which may not go down during low-usage hours).
So the stateful-server approach ends up creating fewer servers with fewer clustering problems, because the caching layer reduces DB interaction.
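That caching layer amounts to a read-through cache sitting in front of the database. A minimal plain-Java sketch of the idea (the `Function`-typed `database` field is a stand-in for the real DB call, not any framework API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal read-through cache: the application tier answers repeat reads
// from memory and only touches the database on a miss.
public class ReadThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> database; // stand-in for the real DB call
    private int dbHits = 0;

    public ReadThroughCache(Function<String, String> database) {
        this.database = database;
    }

    public String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            dbHits++;               // only misses reach the database
            return database.apply(k);
        });
    }

    public int databaseHits() { return dbHits; }

    public static void main(String[] args) {
        ReadThroughCache c = new ReadThroughCache(k -> "row:" + k);
        c.get("42"); c.get("42"); c.get("42"); // three reads...
        System.out.println(c.databaseHits());  // ...one DB interaction
    }
}
```

Three client reads, one DB hit: that ratio is exactly why a stateful tier lets you run fewer DB servers.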
Seam has been valuable in my usage because it introduces an intermediate tier of business-process permanence that, in a stateless environment, would traditionally be backed by heavy DB interaction and persistent models.
The component model lets me write to the DB only what is appropriate for long-term storage, and manage in the application tier what is best for the data and the process.
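A Seam-free sketch of that pattern: keep the in-flight process state in the application tier and flush only the long-term result to the database. The class and method names below are invented for illustration (Seam's conversation-scoped components express the same thing with annotations and far less ceremony):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustration of managing process state in the application tier and writing
// only the final, long-term result to the database. All names are invented.
public class OrderWizard {
    private final List<String> draftItems = new ArrayList<>(); // lives in memory only
    private final Consumer<String> database;                   // stand-in for a DB write

    public OrderWizard(Consumer<String> database) { this.database = database; }

    /** Each wizard step mutates in-memory state; no DB interaction yet. */
    public void addItem(String item) { draftItems.add(item); }

    /** Only the completed order is appropriate for long-term storage. */
    public void checkout() {
        database.accept("ORDER:" + String.join(",", draftItems));
        draftItems.clear();
    }

    public static void main(String[] args) {
        List<String> writes = new ArrayList<>();
        OrderWizard w = new OrderWizard(writes::add);
        w.addItem("book"); w.addItem("pen");  // no DB writes so far
        w.checkout();                          // exactly one write
        System.out.println(writes);
    }
}
```

Every intermediate click stays in the application tier; the database sees one write instead of one per step.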
...is there something designed to work around this problem?
Not really seeing a problem.
If you want to reduce DB load and the cache is valuable, use a stateful framework like Seam, and accept that you might have some idle servers around.
If you want to reduce application instances, don't have state... but you may end up with a bunch of DB clusters running 24/7 to support that infrastructure (even in non-peak hours).
(man, Infinispan looks awesome! thanks for the mention)
Michael Schütz: Everyone is talking Cloud at the moment. Are there plans within the Seam Framework to provide Cloud support in any way?
Pete Muir: Yes. We're starting by providing integration for JClouds and Infinispan (to give you access to Data Grids and blobs stored in the cloud) -- expect an alpha any day. We're also developing demos that show off running Seam on JBoss AS in EC2 (Google App Engine support is also planned). We're using Red Hat developed tools such as Deltacloud, JBoss Tools and SteamCannon to run the JBoss AS instances and deploy the application.
You mentioned some helpful points.
Can you share some details about your experience with Seam 2, JBoss, and PostgreSQL on EC2?
Where was the DB stored? How many DB servers?
How many JBoss servers?
Any guides/resources used to help deploy?
Did you have any issues?
What things to watch out for?
I guess what made things unclear was Mani's statement that shrinking back to one machine is not possible because the sessions couldn't be moved off the idle servers.
That's not entirely true, since there is software that can manage that, like mod_cluster.
Bela Ban gave a webinar focusing on this point: http://www.vimeo.com/13180921
In the presentation there's a showcase where he moves a session from one server to another easily and cleanly.
I might write a blog about that.
Thank you very much Todd, and I'd love to hear the answers to Zee Zain's questions!