Can you be more specific? SwitchYard itself doesn't do anything for clustering on HTTP or JMS. For HTTP it's a load-balancer kind of thing, and for JMS you'd look at the broker side.
Do you mean that for HTTP we might need an external load balancer? Is there something available as a JBoss EAP subsystem?
When I was using mod_jk, it was an Apache HTTP Server plugin, not part of the EAP bits. mod_cluster may be the same kind of thing, but I'm not sure. It's definitely not in SwitchYard.
For a JMS cluster, just look at the configuration of the messaging subsystem in standalone-full-ha.xml. When you start 2 servers with the cluster user and password set to the same values on both servers, HornetQ forms a JMS cluster. Remember that the JMS cluster is just for load-balancing messages. It does not provide HA.
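For illustration, a minimal sketch of the relevant bits of the messaging subsystem in standalone-full-ha.xml (HornetQ; the subsystem namespace version varies between EAP/WildFly releases, and the user/password values below are placeholders — just make them identical on both servers):

```xml
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <!-- must match on every server that should join the cluster -->
        <cluster-user>clusteradmin</cluster-user>
        <cluster-password>changeme</cluster-password>
        <!-- standalone-full-ha.xml already ships broadcast/discovery groups
             and a cluster-connection; those can stay as they are -->
    </hornetq-server>
</subsystem>
```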
Thanks for the reply. Is there any way we can achieve HA with JMS?
Not sure if you have WF10 (with Artemis) or an older WF8/9 (with HornetQ).
Never mind, the basic principle is the same. Both of them use an active-passive topology. That means there is one active server, called the live server, which serves all clients, and one passive server, called the backup, which just checks whether the live server is still alive. The backup behaves as if it were dead; no client can connect to it. But if the live server crashes, the backup activates and clients fail over to it.
There are 2 ways to have HA with HornetQ/Artemis:
a) Shared store: Live and backup share the same journal directory. Usually it's located on NFSv4 or GFS2 on a SAN. It's officially supported only on RHEL machines, but it might work elsewhere as well. The live server holds a file lock on this journal, so the backup can't write into it. Once this lock disappears (meaning the live server is dead), the backup locks the journal and activates. The important thing here is that live and backup use the same journal directory.
b) Replicated journal: There is no shared journal directory. The live server simply replicates every message over the network to the backup. So the live server has its own journal, and the backup also has its own journal, which is kept synchronized with the live server. If the backup loses its network connection to the live server but can still see more than half of the servers in the cluster, it activates.
Both approaches lower performance. With shared store, having the journal on NFSv4 is slower than on a local disk. Replication suffers from network round-trip times between live and backup.
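To make the two options concrete, here is a rough sketch of the HornetQ subsystem settings involved (element names as in the EAP 6 era messaging subsystem; the directory path and group name are placeholders, and the surrounding cluster configuration from standalone-full-ha.xml is omitted):

```xml
<!-- a) Shared store: live and backup point at the same journal on NFSv4/GFS2 -->
<hornetq-server>
    <shared-store>true</shared-store>
    <!-- on the backup server additionally: <backup>true</backup> -->
    <journal-directory path="/mnt/shared-san/journal"/>
</hornetq-server>

<!-- b) Replicated journal: no shared directory, live replicates to backup -->
<hornetq-server>
    <shared-store>false</shared-store>
    <backup-group-name>pair-1</backup-group-name>
    <!-- on the backup server additionally: <backup>true</backup> -->
</hornetq-server>
```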
You can find a quite useful configuration guide for HornetQ in the EAP 6 documentation: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html/Administration_and_Configuration_Guide/sect-High_Availability.html
The names of the HornetQ attributes are still the same.
I have a scenario with two physical servers, each running a JBoss FSW instance. I need to deploy a SOAP service on both FSW instances.
When a client sends a request to this SOAP service, it should be load-balanced across both instances. How do I need to configure load balancing, and how can I achieve high availability?
Can you please explain in a bit more detail, as the documentation is not clear?
You would want to ask in the Apache httpd community or Red Hat support. I think there are very few people here in the SwitchYard community who know external load balancers well.
Can you please let me know if we would need to make any application-level changes to deploy it as a clustered application in the case of a SOAP binding?
I don't think there's anything specific to clustering from SwitchYard SOAP binding perspective.
Please confirm if the below is correct.
The SOAP application is developed as a standard application and deployed on different JBoss servers.
An Apache httpd server (mod_cluster/mod_jk) is configured to connect to these JBoss servers, and the client is supplied with the Apache httpd URL to consume the web service.
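For example, something like this mod_proxy_balancer configuration (an alternative to mod_jk/mod_cluster; hostnames, ports and the context path are placeholders for the two FSW instances) is roughly what I have in mind for the front end:

```apache
# Requires mod_proxy, mod_proxy_http, mod_proxy_balancer and mod_slotmem_shm
<Proxy "balancer://fswcluster">
    BalancerMember "http://fsw-node1:8080"
    BalancerMember "http://fsw-node2:8080"
</Proxy>
ProxyPass        "/myservice" "balancer://fswcluster/myservice"
ProxyPassReverse "/myservice" "balancer://fswcluster/myservice"
```

The client would then call the SOAP endpoint through the httpd host rather than either JBoss server directly.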
I think Tomo's right that you want to ask Red Hat support or in the Apache httpd community (or maybe the WildFly forum). This isn't a scenario we have a lot of experience in.
If there's a specific issue in a SwitchYard application re: clustering, we're definitely here to help.