- 
        1. Re: AWS ElastiCache (pferraro, Oct 6, 2017 8:21 AM, in response to alessandromoscatelli)
        I'm confused. Why can't you use any cluster mode or configuration for your WF servers? Why would deploying on AWS prohibit you from using WF clustering?
- 
        2. Re: AWS ElastiCache (alessandromoscatelli, Oct 6, 2017 9:55 AM, in response to pferraro)
        Because Beanstalk autoscales: it may create a new WildFly instance from a Docker image at any time, without you knowing. Every node in Beanstalk is equal; there is no master, no slave, no differing configuration. Are you saying it is possible to create a WildFly cluster made of EQUAL WildFly instances in domain mode, with the same domain.xml file and NO differences in ANY configuration?
- 
        3. Re: AWS ElastiCache (alessandromoscatelli, Oct 6, 2017 10:08 AM, in response to pferraro)
        This post explains the problem with using WildFly clusters in an autoscaling cloud, but nobody answered it: Configure Wildfly Domain Cluster on AWS with Autoscaling. I already figured out this is not possible, so I moved on to the next step: since I cannot have a WF cluster made of equal WFs, how can I store sessions and caches (with or without Infinispan) in an external (and scaling) Redis or Memcached?
- 
        4. Re: AWS ElastiCache (pferraro, Oct 6, 2017 2:27 PM, in response to alessandromoscatelli)
        Yes, as long as each standalone instance is allocated a unique host name. You would configure your JGroups subsystem to use the S3_PING protocol, which uses shared bucket storage for discovery. This configuration would be common to all standalone instances.
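        A minimal sketch of such a stack, assuming the standard JGroups S3_PING properties; the bucket name and credential variables below are placeholders, not values from this thread:

```xml
<stack name="s3-tcp">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="S3_PING">
        <!-- hypothetical bucket; credentials can also come from the environment -->
        <property name="location">my-cluster-bucket</property>
        <property name="access_key">${env.AWS_ACCESS_KEY}</property>
        <property name="secret_access_key">${env.AWS_SECRET_KEY}</property>
    </protocol>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK"/>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2"/>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
</stack>
```

        Because every node reads and writes discovery data in the same shared bucket, the identical stack definition can ship in every autoscaled instance's configuration.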
- 
        5. Re: AWS ElastiCache (alessandromoscatelli, Oct 9, 2017 4:31 AM, in response to pferraro)
        This sounds like very good news. I'll try it and let you know if everything works out.
- 
        6. Re: AWS ElastiCache (alessandromoscatelli, Oct 9, 2017 5:34 PM, in response to pferraro)
        I tried S3_PING, but it is buggy and no longer works with the Amazon S3 API (if you are using authentication, as you should): [JGRP-1914] S3_PING doesn't work with S3 buckets created in Frankfurt region - JBoss Issue Tracker. The expected fix is to use NATIVE_S3_PING, but you can't use that in WildFly yet, and there is already a feature request for it: [WFLY-8770] Integrate NATIVE_S3_PING discovery protocol - JBoss Issue Tracker. Now I am trying to use JDBC_PING (which is even better, IMHO), but it is also buggy: Is there a way to ensure that one subsystem is fully initialized before another one? I have exactly the same issue with both WF 10.1 and WF 11.CR1, so I don't think it has ever been fixed. I filed a new bug: [WFLY-9427] JDBC_PING gets a NameNotFoundException when using jndi resource - JBoss Issue Tracker. This is starting to get a little frustrating. Please let me know if you can help me. Thank you in advance.
- 
        7. Re: AWS ElastiCache (pferraro, Oct 11, 2017 3:23 AM, in response to alessandromoscatelli)
        NATIVE_S3_PING is usable in WF11; however, we have not yet packaged it as a module (WFLY-8770). You can do this manually and then reference it via your module name:

            <protocol type="org.jgroups.aws.s3.NATIVE_S3_PING" module="..."/>

        JDBC_PING does work in WF11.CR1, but let's figure out your issue on the JIRA.
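        A sketch of what that manual packaging might look like; the module name org.jgroups.aws, the jar file names, and the region/bucket values are all assumptions for illustration:

```xml
<!-- sketch of modules/system/layers/base/org/jgroups/aws/main/module.xml;
     module name and jar names are hypothetical -->
<module xmlns="urn:jboss:module:1.5" name="org.jgroups.aws">
    <resources>
        <resource-root path="native-s3-ping.jar"/>
        <resource-root path="aws-java-sdk-s3.jar"/>
    </resources>
    <dependencies>
        <module name="org.jgroups"/>
    </dependencies>
</module>
```

        The protocol could then be referenced from the JGroups stack like this (region and bucket are placeholders):

```xml
<protocol type="org.jgroups.aws.s3.NATIVE_S3_PING" module="org.jgroups.aws">
    <property name="region_name">eu-central-1</property>
    <property name="bucket_name">my-cluster-bucket</property>
</protocol>
```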
- 
        8. Re: AWS ElastiCache (alessandromoscatelli, Oct 12, 2017 11:29 AM, in response to pferraro)
        I managed to get an autoscaling WildFly standalone-full-ha cluster running on AWS Beanstalk (Multicontainer Docker) with this configuration:

            <stack name="jdbc">
                <transport type="TCP" socket-binding="jgroups-tcp">
                    <property name="external_addr">${env.EC2HOSTNAME}</property>
                </transport>
                <protocol type="org.jgroups.protocols.JDBC_PING">
                    <property name="connection_url">connection</property>
                    <property name="connection_username">username</property>
                    <property name="connection_password">password</property>
                    <property name="connection_driver">com.mysql.cj.jdbc.Driver</property>
                </protocol>
                <protocol type="MERGE3"/>
                <protocol type="FD_SOCK">
                    <property name="external_addr">${env.EC2HOSTNAME}</property>
                    <property name="external_port">57600</property>
                    <property name="start_port">57600</property>
                    <property name="port_range">1</property>
                </protocol>
                <protocol type="FD"/>
                <protocol type="VERIFY_SUSPECT"/>
                <protocol type="pbcast.NAKACK2"/>
                <protocol type="UNICAST3"/>
                <protocol type="pbcast.STABLE"/>
                <protocol type="pbcast.GMS"/>
                <protocol type="MFC"/>
                <protocol type="FRAG2"/>
            </stack>

        I set the EC2HOSTNAME environment variable at the beginning of standalone.conf (inside the WildFly bin folder) with:

            # EC2 METADATA
            export EC2HOSTNAME="$(curl http://169.254.169.254/latest/meta-data/local-hostname)"

        I'd like to use a JNDI resource instead of a direct connection, as I said in the bug I filed (I'll run some new tests as soon as possible). Now I am having problems with ActiveMQ (I disabled it, but I want to re-enable it):

            2017-10-12 10:47:49,326 ERROR [org.apache.activemq.artemis.core.client] (Thread-0 (ActiveMQ-scheduled-threads)) AMQ214016: Failed to create netty connection: java.net.UnknownHostException: b9ce94d895f8

        where b9ce94d895f8 is one of the Docker container hostnames. I'd like ActiveMQ to use JGroups for discovering and resolving addresses. I'll keep looking for solutions.
- 
        9. Re: WF11 Standalone-Full-HA Clustering on AWS Beanstalk (alessandromoscatelli, Oct 13, 2017 4:16 AM, in response to pferraro)
        OK, I closed the JIRA issue, because I tested the

            <jdbc-protocol type="JDBC_PING" data-source="db_optoplus"/>

        configuration on a clean WildFly and it now works. Thanks for your support. I am now using the JNDI configuration because it's better for me.
        Can you help me with the ActiveMQ problem too? I'd like ActiveMQ to use the same JGroups information I used for cluster discovery. Right now ActiveMQ is trying to ping the wrong hostnames directly (the other containers' names).
        I want ActiveMQ to use the external address and port I wrote into the ping_data with JDBC_PING for the 'ee' JGroups channel. Is that possible?
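        For reference, the earlier stack rewritten around the data-source-backed element might look like the sketch below. Only the discovery protocol changes; the datasource name db_optoplus comes from the post above, and everything else is carried over unchanged from the stack earlier in the thread:

```xml
<stack name="jdbc">
    <transport type="TCP" socket-binding="jgroups-tcp">
        <property name="external_addr">${env.EC2HOSTNAME}</property>
    </transport>
    <!-- WF11 resolves the datasource itself, so no connection_* properties are needed -->
    <jdbc-protocol type="JDBC_PING" data-source="db_optoplus"/>
    <protocol type="MERGE3"/>
    <protocol type="FD_SOCK">
        <property name="external_addr">${env.EC2HOSTNAME}</property>
        <property name="external_port">57600</property>
        <property name="start_port">57600</property>
        <property name="port_range">1</property>
    </protocol>
    <protocol type="FD"/>
    <protocol type="VERIFY_SUSPECT"/>
    <protocol type="pbcast.NAKACK2"/>
    <protocol type="UNICAST3"/>
    <protocol type="pbcast.STABLE"/>
    <protocol type="pbcast.GMS"/>
    <protocol type="MFC"/>
    <protocol type="FRAG2"/>
</stack>
```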
- 
        10. Re: WF11 Standalone-Full-HA Clustering on AWS Beanstalk (pferraro, Oct 14, 2017 4:35 AM, in response to alessandromoscatelli)
        Yes: you can configure the ActiveMQ broadcast and discovery groups to share the same JGroups channel, which will then use the same JDBC_PING configuration, e.g.:

            <broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster"/>
            <discovery-group name="dg-group1" refresh-timeout="1000" jgroups-channel="activemq-cluster"/>
- 
        11. Re: WF11 Standalone-Full-HA Clustering on AWS Beanstalk (alessandromoscatelli, Oct 16, 2017 4:50 AM, in response to pferraro)
        I am already using this configuration:

            <broadcast-group name="bg-group1" jgroups-channel="activemq-cluster" connectors="http-connector"/>
            <discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>

        but I still get:

            java.net.UnknownHostException: b9ce94d895f8

        "b9ce94d895f8" is the name of the Docker container and the hostname for WildFly. So it seems ActiveMQ is not using the external address I configured with:

            <transport type="TCP" socket-binding="jgroups-tcp">
                <property name="external_addr">${env.EC2HOSTNAME}</property>
            </transport>

        Is there something else I can try?
- 
        12. Re: WF11 Standalone-Full-HA Clustering on AWS Beanstalk (alessandromoscatelli, Oct 16, 2017 9:21 AM, in response to pferraro)
        Here are the logs of the 3 standalone-full-ha WildFly instances. I can't understand why they try to resolve the hostname instead of using the external address...

        Attachment: cluster.zip (503.1 KB)
 
- 
        13. Re: WF11 Standalone-Full-HA Clustering on AWS Beanstalk (pferraro, Oct 16, 2017 9:45 AM, in response to alessandromoscatelli)
        Why would you need to specify an external_addr? That is only needed when members are on different networks and the interface address to which a member is bound isn't visible to the other members of the cluster; in that case external_addr points to the address of the router.
- 
        14. Re: WF11 Standalone-Full-HA Clustering on AWS Beanstalk (alessandromoscatelli, Oct 16, 2017 10:10 AM, in response to pferraro)
        Because my WildFly instances are started inside Docker multicontainers (that's how Elastic Beanstalk scales out automatically). So, for example, Container 1 on Instance 1 contains WildFly, and Container 1 on Instance 2 also contains WildFly.
        Instance 1 can send packets to Instance 2, but containers on different instances are not on the same network, so they can't talk to each other directly; that is why I have to set an external address in the JGroups TCP protocol to let them reach each other.
        You can find my JGroups configuration a few replies above. So basically every WildFly is behind a different NAT/router (the host machine, the EC2 instance). The external address is the address of the EC2 instance for each WildFly.
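        For the external_addr approach to work through that NAT, the JGroups ports also have to be published 1:1 from container to host. In a Beanstalk Multicontainer Docker environment that mapping lives in Dockerrun.aws.json; the sketch below is illustrative (image name and memory are placeholders; 7600 assumes the default jgroups-tcp socket binding, and 57600 matches the FD_SOCK external_port used earlier in the thread):

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "wildfly",
      "image": "my-registry/wildfly-app:latest",
      "memory": 2048,
      "portMappings": [
        { "hostPort": 8080, "containerPort": 8080 },
        { "hostPort": 7600, "containerPort": 7600 },
        { "hostPort": 57600, "containerPort": 57600 }
      ]
    }
  ]
}
```

        Keeping hostPort equal to containerPort matters here: JGroups advertises the EC2 host address with its own bound port, so an asymmetric mapping would make the advertised address/port pair unreachable.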
 
     
    