-
15. Re: Client failover with static clusters in HA mode
ataylor May 31, 2013 2:06 PM (in response to hyrax)
"I'm actually using a JMS URL to connect my client app to the HornetQ server, like jms://localhost:5445. Is that a problem?"
There is no such thing as a JMS URL. If you mean you are using JNDI, then you should use the host and port of the JNDI server (1099 by default, I think).
-
16. Re: Client failover with static clusters in HA mode
hyrax May 31, 2013 2:17 PM (in response to ataylor)Maybe I was using the wrong terminology, sorry.
When I said JMS URL, I meant creating the JMS ConnectionFactory object without using JNDI. Please refer to <7.6. Directly instantiating JMS Resources without using JNDI> in the HornetQ manual.
There's also a concrete example in <Chapter 16. Configuring the Transport>.
Hope this helps.
Hyrax
-
17. Re: Client failover with static clusters in HA mode
ataylor May 31, 2013 2:30 PM (in response to hyrax)If you are using the direct method of creating the factory, then you also need to configure it programmatically just as you would in the config file, e.g. factory.setHA(true), etc.
-
18. Re: Client failover with static clusters in HA mode
hyrax May 31, 2013 2:49 PM (in response to ataylor)Hi Andy,
Thanks a lot for your quick response.
This is how I create the factory:
Map<String, Object> connectionParams = new HashMap<>();
connectionParams.put(TransportConstants.PORT_PROP_NAME, port);
connectionParams.put(TransportConstants.HOST_PROP_NAME, host);
TransportConfiguration transportConfiguration =
    new TransportConfiguration(NettyConnectorFactory.class.getName(), connectionParams);
return HornetQJMSClient.createConnectionFactoryWithHA(JMSFactoryType.CF, transportConfiguration);

I think createConnectionFactoryWithHA will do the job in this case, right?

Thanks,
Hao
-
19. Re: Client failover with static clusters in HA mode
hyrax Jun 20, 2013 10:44 AM (in response to hyrax)I finally figured it out by myself.
To implement the static cluster in hornetq-configuration.xml, you can remove the 'discovery group' part entirely. Add a connector/acceptor pair (let's call it 'netty') for client connections, another pair (let's call it 'netty-live', if this is the live server) for the backup server's connection, and a connector (let's call it 'netty-backup', if this is the live server) for the live server to connect to the backup; then reference 'netty-backup' in the 'static-connectors' section. Do the same on the backup server. In hornetq-jms.xml you just need to reference the 'netty' connector, and that's it.
Hope this helps,
Hyrax
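As a rough illustration of hyrax's recipe, a live-server hornetq-configuration.xml fragment might look like the sketch below. Host names and ports are placeholders, the 'netty-live' connector/acceptor pair is omitted for brevity, and element names may vary slightly between HornetQ versions, so treat this as a starting point rather than a verified configuration:

```xml
<!-- Sketch of a live server's hornetq-configuration.xml for a static cluster.
     "live-host"/"backup-host" and the ports are placeholders. -->
<connectors>
   <!-- clients connect through this one -->
   <connector name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="live-host"/>
      <param key="port" value="5445"/>
   </connector>
   <!-- the live server uses this one to reach the backup -->
   <connector name="netty-backup">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyConnectorFactory</factory-class>
      <param key="host" value="backup-host"/>
      <param key="port" value="5445"/>
   </connector>
</connectors>

<acceptors>
   <acceptor name="netty">
      <factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory</factory-class>
      <param key="host" value="live-host"/>
      <param key="port" value="5445"/>
   </acceptor>
</acceptors>

<!-- no broadcast/discovery groups; the cluster member is listed statically -->
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty</connector-ref>
      <static-connectors>
         <connector-ref>netty-backup</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```

The backup server would mirror this with the host names swapped, and hornetq-jms.xml only needs to reference the 'netty' connector.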
-
20. Re: Client failover with static clusters in HA mode
dukechandu Feb 3, 2017 1:47 PM (in response to hyrax)Hi,
If possible, could you please post the configurations? I am facing a similar kind of issue.
Thanks in advance
-
21. Re: Client failover with static clusters in HA mode
jbertram Feb 3, 2017 7:46 PM (in response to dukechandu)I recommend you start a new thread and describe your use case and what exactly you need. Commenting on old threads is rarely a good idea.
-
22. Re: Client failover with static clusters in HA mode
hjy Mar 8, 2017 2:45 AM (in response to jbertram)Hi Justin,
I use the data replication HA mode.
I use JNDI to look up ... I set both the live and backup server URLs in the provider URL in my JNDI properties. My application connects to the live server and runs well. Then I shut down the live server; the backup server starts up and the messages from the live server are replicated to it. But on the application side, it cannot connect to the backup server again.
Below is the stack trace:
Caused by: javax.jms.JMSException: Failed to create session factory
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:587) ~[hornetq-jms-client-2.3.25.Final-redhat-1.jar!/:2.3.25.Final-redhat-1]
at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:107) ~[hornetq-jms-client-2.3.25.Final-redhat-1.jar!/:2.3.25.Final-redhat-1]
Caused by: org.hornetq.api.core.HornetQNotConnectedException: HQ119007: Cannot connect to server(s). Tried with all available servers.
at org.hornetq.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:902) ~[hornetq-core-client-2.3.25.Final-redhat-1.jar!/:2.3.25.Final-redhat-1]
at org.hornetq.jms.client.HornetQConnectionFactory.createConnectionInternal(HornetQConnectionFactory.java:583) ~[hornetq-jms-client-2.3.25.Final-redhat-1.jar!/:2.3.25.Final-redhat-1]
at org.hornetq.jms.client.HornetQConnectionFactory.createConnection(HornetQConnectionFactory.java:107) ~[hornetq-jms-client-2.3.25.Final-redhat-1.jar!/:2.3.25.Final-redhat-1]
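For reference, the dual-URL JNDI setup described above might look like the jndi.properties sketch below. The factory class and port assume a JBoss EAP 6-style remote-naming client (the 2.3.25.Final-redhat-1 jars in the stack trace suggest EAP 6); the host names are placeholders:

```properties
# Hypothetical jndi.properties for looking up the ConnectionFactory
# against either the live or the backup server.
java.naming.factory.initial=org.jboss.naming.remote.client.InitialContextFactory
java.naming.provider.url=remote://live-host:4447,remote://backup-host:4447
```

Note that, as far as I understand it, this URL list only affects where the ConnectionFactory is looked up; failover of an established connection depends on the factory itself being configured with <ha>true</ha> and enough <reconnect-attempts>.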
-
23. Re: Client failover with static clusters in HA mode
jbertram Mar 8, 2017 9:12 AM (in response to hjy)Did you not see my previous message?
I recommend you start a new thread and describe your use case and what exactly you need. Commenting on old threads is rarely a good idea.
-
24. Re: Client failover with static clusters in HA mode
hjy Mar 16, 2017 11:50 PM (in response to jbertram)Hi Justin,
I've run into a new problem now.
The message consumers failed to start up after failover, and I found duplicate messages after failover.
I run an application connected to the live and backup servers. There are 12 consumers on MFQueue on the live HornetQ server. I send 600 messages to MFQueue. While the queue is being processed, I shut down the live server and the backup server becomes the live one. But I found only 6 consumers on MFQueue on the backup server, and 608 messages consumed in total. So there are 8 duplicate messages and 6 consumers lost after failover. Can you help check my configuration and advise on it? Thanks.
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
<hornetq-server>
<persistence-enabled>true</persistence-enabled>
<cluster-user>hornetqclusteruser</cluster-user>
<cluster-password>hornetq12£</cluster-password>
<shared-store>false</shared-store>
<journal-file-size>102400</journal-file-size>
<journal-min-files>2</journal-min-files>
<check-for-live-server>true</check-for-live-server>
<connectors>
<netty-connector name="netty" socket-binding="messaging"/>
<netty-connector name="netty-throughput" socket-binding="messaging-throughput">
<param key="batch-delay" value="50"/>
</netty-connector>
<in-vm-connector name="in-vm" server-id="0"/>
</connectors>
<acceptors>
<netty-acceptor name="netty" socket-binding="messaging"/>
<netty-acceptor name="netty-throughput" socket-binding="messaging-throughput">
<param key="batch-delay" value="50"/>
<param key="direct-deliver" value="false"/>
</netty-acceptor>
<in-vm-acceptor name="in-vm" server-id="0"/>
</acceptors>
<broadcast-groups>
<broadcast-group name="bg-group1">
<socket-binding>messaging-group</socket-binding>
<broadcast-period>5000</broadcast-period>
<connector-ref>netty</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="dg-group1">
<socket-binding>messaging-group</socket-binding>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<address>jms</address>
<connector-ref>netty</connector-ref>
<discovery-group-ref discovery-group-name="dg-group1"/>
</cluster-connection>
</cluster-connections>
<security-settings>
<security-setting match="#">
<permission type="send" roles="admin guest"/>
<permission type="consume" roles="admin guest"/>
<permission type="createNonDurableQueue" roles="guest"/>
<permission type="deleteNonDurableQueue" roles="guest"/>
<permission type="manage" roles="admin guest"/>
</security-setting>
</security-settings>
<address-settings>
<address-setting match="#">
<dead-letter-address>jms.queue.DLQ</dead-letter-address>
<expiry-address>jms.queue.ExpiryQueue</expiry-address>
<redelivery-delay>0</redelivery-delay>
<redistribution-delay>1000</redistribution-delay>
<max-size-bytes>10485760</max-size-bytes>
<address-full-policy>BLOCK</address-full-policy>
<message-counter-history-day-limit>10</message-counter-history-day-limit>
</address-setting>
</address-settings>
<jms-connection-factories>
<connection-factory name="InVmConnectionFactory">
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/ConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="RemoteConnectionFactory">
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="java:jboss/exported/jms/RemoteConnectionFactory"/>
</entries>
</connection-factory>
<connection-factory name="XAGenericConnectionFactory">
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="java:jboss/exported/jms/XAGenericConnectionFactory"/>
</entries>
<ha>true</ha>
<block-on-acknowledge>true</block-on-acknowledge>
<retry-interval>1000</retry-interval>
<retry-interval-multiplier>1.0</retry-interval-multiplier>
<reconnect-attempts>10</reconnect-attempts>
</connection-factory>
<connection-factory name="GenericConnectionFactory">
<connectors>
<connector-ref connector-name="netty"/>
</connectors>
<entries>
<entry name="java:jboss/exported/jms/GenericConnectionFactory"/>
</entries>
<ha>true</ha>
<block-on-acknowledge>true</block-on-acknowledge>
<retry-interval>1000</retry-interval>
<retry-interval-multiplier>1.0</retry-interval-multiplier>
<reconnect-attempts>10</reconnect-attempts>
</connection-factory>
<pooled-connection-factory name="hornetq-ra">
<transaction mode="xa"/>
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/JmsXA"/>
</entries>
</pooled-connection-factory>
</jms-connection-factories>
<jms-destinations>
<jms-queue name="MFQueue">
<entry name="java:jboss/exported/MFQueue"/>
<durable>true</durable>
</jms-queue>
<jms-queue name="MF.Queue.NA">
<entry name="java:jboss/exported/MF.Queue.NA"/>
<durable>true</durable>
</jms-queue>
<jms-queue name="MFQueueEXCEPTION">
<entry name="java:jboss/exported/MFQueueEXCEPTION"/>
<durable>true</durable>
</jms-queue>
</jms-destinations>
</hornetq-server>
</subsystem>
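For comparison with the static-cluster approach discussed earlier in the thread: the discovery-group-based <cluster-connection> above could in principle be replaced by a static variant. The fragment below is an untested sketch for the messaging:1.4 subsystem, where 'netty-backup' is a hypothetical extra netty-connector bound to the other node:

```xml
<!-- Untested sketch: static cluster membership instead of dg-group1.
     "netty-backup" would be an additional netty-connector whose
     socket-binding points at the other node. -->
<cluster-connections>
   <cluster-connection name="my-cluster">
      <address>jms</address>
      <connector-ref>netty</connector-ref>
      <static-connectors>
         <connector-ref>netty-backup</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```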