I'm confused. Why can't you use "any cluster mode or configuration" for your WF servers? Why would deploying on AWS prohibit you from using WF clustering?
Because Beanstalk AUTOscales, it may create a new WildFly instance, starting from a Docker image, at any time and without you knowing.
Every node in Beanstalk is equal: there is no master, no slave, no different configuration.
Are you saying that it is possible to create a WildFly cluster made of EQUAL WildFly instances in domain mode, with the same domain.xml file and NO differences in ANY part of the configuration?
This person explained the problem of using WildFly clusters in an autoscaling cloud, but nobody answered him:
I already figured out this is not possible, so I moved on to the next step:
Since I cannot have a WF cluster made of equal WFs, how can I store sessions and caches (with or without Infinispan) in an external (and scaling) Redis or Memcached?
Yes - as long as each standalone instance is allocated a unique host name. You would configure your JGroups subsystem to use the S3_PING protocol, which relies on shared bucket storage for discovery. This configuration would be common to all standalone instances.
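For reference, a minimal sketch of such a stack in the JGroups subsystem (the stack name, bucket name, and trimmed protocol list are assumptions; S3_PING also accepts credentials via the access_key/secret_access_key properties):

```xml
<!-- Sketch only: a TCP stack where S3_PING replaces multicast discovery.
     "my-cluster-bucket" is a placeholder bucket name. -->
<stack name="s3">
    <transport type="TCP" socket-binding="jgroups-tcp"/>
    <protocol type="S3_PING">
        <property name="location">my-cluster-bucket</property>
    </protocol>
    <!-- ...remaining protocols as in the default "tcp" stack... -->
</stack>
```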
This seems like very good news
I'll try and let you know if everything works out.
I tried S3_PING, but it is bugged and no longer works with the Amazon S3 API (if you are using authentication, as you should):
The expected fix is to use NATIVE_S3_PING, but you cannot use it in WildFly yet, and there is already a feature request for it:
Now I am trying to use JDBC_PING (which is even better, imho), but it is also bugged:
I have exactly the same issue with both WF 10.1 and WF 11 RC1, so I don't think it has ever been fixed.
I filed a new bug:
This is starting to get a little frustrating.
Please let me know if you can help me
Thank you in advance
NATIVE_S3_PING is usable in WF11; however, we have not yet packaged it as a module (WFLY-8770). You can do this manually and then specify it using your module name:
<protocol type="org.jgroups.aws.s3.NATIVE_S3_PING" module="..."/>
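As an illustration, the manual module could look roughly like this (a sketch; the module name, path, and jar file name are assumptions, since WFLY-8770 is precisely about providing an official module):

```xml
<!-- Hypothetical $JBOSS_HOME/modules/org/jgroups/aws/s3/main/module.xml -->
<module xmlns="urn:jboss:module:1.3" name="org.jgroups.aws.s3">
    <resources>
        <!-- the NATIVE_S3_PING jar (and its dependencies), copied into this directory -->
        <resource-root path="jgroups-native-s3-ping.jar"/>
    </resources>
    <dependencies>
        <module name="org.jgroups"/>
    </dependencies>
</module>
```

The protocol element above would then reference it with module="org.jgroups.aws.s3".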
JDBC_PING is indeed working in WF11.CR1, but let's figure out your issue on the JIRA.
I managed to get an autoscaling WildFly standalone-full-ha cluster running in AWS Beanstalk (Multicontainer Docker) with this configuration:
<transport type="TCP" socket-binding="jgroups-tcp">
    <property name="external_addr">${env.EC2HOSTNAME}</property>
</transport>
I set the EC2HOSTNAME environment variable at the beginning of standalone.conf (inside the WildFly bin folder) with:
# EC2 METADATA
export EC2HOSTNAME="$(curl http://169.254.169.254/latest/meta-data/local-hostname)"
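Putting those pieces together, the discovery setup could be sketched like this (assumptions: the EC2HOSTNAME export above, the db_optoplus datasource mentioned further down in this thread, and a trimmed protocol list):

```xml
<!-- Sketch: TCP transport advertising the EC2 instance address,
     with JDBC_PING writing discovery data to a shared database. -->
<stack name="tcp">
    <transport type="TCP" socket-binding="jgroups-tcp">
        <!-- address other cluster members should use to reach this node -->
        <property name="external_addr">${env.EC2HOSTNAME}</property>
    </transport>
    <jdbc-protocol type="JDBC_PING" data-source="db_optoplus"/>
    <!-- ...remaining protocols as in the default "tcp" stack... -->
</stack>
```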
I'd like to use a JNDI resource instead of a direct connection, as I said in the bug I filed (I'll run some new tests as soon as possible).
Now I am having problems with ActiveMQ (I disabled it, but I want to re-enable it):
2017-10-12 10:47:49,326 ERROR [org.apache.activemq.artemis.core.client] (Thread-0 (ActiveMQ-scheduled-threads)) AMQ214016: Failed to create netty connection: java.net.UnknownHostException: b9ce94d895f8
Where b9ce94d895f8 is the hostname of one of the Docker containers.
I'd like ActiveMQ to use JGroups for discovery and address resolution.
I'll keep looking for solutions.
OK, I closed the JIRA issue: I tested the <jdbc-protocol type="JDBC_PING" data-source="db_optoplus"/> configuration on a clean WildFly, and now it is working.
Thanks for your support.
Now I am using the JNDI configuration because it's better for me.
Can you help me with the ActiveMQ problem too?
I'd like ActiveMQ to use the same JGroups information I used for cluster discovery.
Right now ActiveMQ is trying to ping the wrong hostnames directly (the other containers' names).
I want ActiveMQ to use the external address and port I wrote into ping_data with JDBC_PING for the 'ee' JGroups channel.
Is that possible?
Yes - you can configure the ActiveMQ broadcast/discovery groups to share the same JGroups channel, which will then use the same JDBC_PING configuration.
<broadcast-group name="bg-group1" connectors="http-connector" jgroups-channel="activemq-cluster"/>
<discovery-group name="dg-group1" refresh-timeout="1000" jgroups-channel="activemq-cluster"/>
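If the "activemq-cluster" channel is not already defined, it can be declared in the JGroups subsystem and pointed at the same stack (a sketch; it assumes "tcp" is the name of the stack carrying your JDBC_PING configuration):

```xml
<channels default="ee">
    <channel name="ee" stack="tcp"/>
    <!-- channel shared by the messaging subsystem's broadcast/discovery groups -->
    <channel name="activemq-cluster" stack="tcp"/>
</channels>
```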
I am already using this configuration:
<broadcast-group name="bg-group1" jgroups-channel="activemq-cluster" connectors="http-connector"/>
<discovery-group name="dg-group1" jgroups-channel="activemq-cluster"/>
"b9ce94d895f8" is the name of the Docker container and the hostname of that WildFly.
So it seems it's not using the external address I configured with:
<transport type="TCP" socket-binding="jgroups-tcp">
    <property name="external_addr">${env.EC2HOSTNAME}</property>
</transport>
Is there something else I can try?
Why would you need to specify an "external_addr"? It is only needed when you have members on different networks: external_addr points to the address of the router, for the case where the interface address to which the member is bound isn't visible to other members of the cluster.
Because my WildFly instances are started inside Docker multicontainers (that's how Elastic Beanstalk scales out automatically):
So, for example, Container 1 on Instance 1 contains WildFly, and Container 1 on Instance 2 also contains WildFly.
Instance 1 can send packets to Instance 2, but containers on different instances are not on the same network, so they cannot talk to each other directly; that's why I have to set the external address in the JGroups TCP protocol so they can reach each other.
You can find my JGroups configuration a few replies above.
So basically every WildFly is behind a different NAT/router (the host machine, the EC2 instance). The external address is the address of the EC2 instance for each WildFly.
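Note that external_addr only helps if the JGroups port is actually reachable on the EC2 host, i.e. the container publishes it. Beanstalk expresses this through its own port mappings; the equivalent plain Docker invocation would look something like the following (a sketch; the image name is a placeholder, and 7600 is the usual jgroups-tcp default port):

```shell
# Publish the JGroups TCP port on the EC2 host so members on other
# instances can reach this container via the instance address.
# "my-wildfly-image" is a placeholder image name.
docker run -d -p 7600:7600 my-wildfly-image
```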