What's the JRE vendor and version running on that Azure system? Can you post the output of java -version from there?
That is really odd.
You could force a selector by passing -Dxnio.nio.selector.provider=<name-of-selector>,
but without knowing the platform/JDK it is hard to say which selector should be forced.
The JRE should be fine: JAVA_HOME points to the expected 1.8_xxx, and I also tried the versions from different vendors which are available on Azure.
I've also set the IPv4 stack argument; that's a requirement of the Azure platform.
I built a custom version of XNIO to capture the actual exception:
2017-01-13 09:08:49,244 INFO [org.jboss.gravia.runtime] (MSC service thread 1-2) Started: Module[gravia-container-wildfly-extension:1.3.1]
2017-01-13 09:08:49,275 INFO [org.wildfly.extension.camel] (MSC service thread 1-2) Bound camel naming object: java:jboss/camel/CamelContextFactory
2017-01-13 09:08:49,291 INFO [org.wildfly.extension.camel] (MSC service thread 1-2) Bound camel naming object: java:jboss/camel/CamelContextRegistry
2017-01-13 09:08:49,634 INFO [org.xnio.nio] (MSC service thread 1-1)
java.io.IOException: Unable to establish loopback connection
... nio stack ...
Caused by: java.net.SocketException: Address family not supported by protocol family: bind
at sun.nio.ch.Net.bind0(Native Method)
... 28 more
From the Azure Sandbox Wiki:
The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports; applications may not listen on other ports for packets arriving from the internet.
However, applications may create a socket which can listen for connections from within the sandbox. For example, two processes within the same app may communicate with one another via TCP sockets; connection attempts incoming from outside the sandbox, even if they are on the same machine, will fail. See the next topic for additional detail.
Connection attempts to local addresses (e.g. 127.0.0.1) and the machine's own IP will fail, except if another process in the same sandbox has created a listening socket on the destination port.
Rejected connection attempts, such as the following example from .NET which attempts to connect to 127.0.0.1:80, will result in the following exception:
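That rule can be reproduced in plain Java, independent of XNIO. This is only a sketch: it binds an ephemeral loopback port, connects to it (which should succeed, since the listener lives in the same process/sandbox), and then connects to a port that is assumed to have no listener (port 1 here is an arbitrary choice), which should be rejected:

```java
import java.io.IOException;
import java.net.ConnectException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class SandboxLoopbackDemo {
    public static void main(String[] args) throws IOException {
        InetAddress loopback = InetAddress.getByName("127.0.0.1");

        // With a listener in place, a loopback connection succeeds.
        // In the Azure sandbox this only works because the listener
        // lives in the same sandbox as the connecting process.
        try (ServerSocket server = new ServerSocket(0, 50, loopback);
             Socket client = new Socket(loopback, server.getLocalPort())) {
            System.out.println("connected=" + client.isConnected());
        }

        // Without a listener, the connection attempt is rejected.
        // Port 1 is assumed to have no listener on this machine.
        try (Socket client = new Socket(loopback, 1)) {
            System.out.println("unexpectedly connected");
        } catch (ConnectException e) {
            System.out.println("refused as expected");
        }
    }
}
```

Outside the sandbox both behaviours match the wiki text; inside it, the second case would also fail for connections coming from outside the sandbox, even on the same machine.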
So my guess is that XNIO tries to bind to some default like %machine_name% (the public address), which the Azure sandbox would disallow.
I will investigate further.
Perhaps someone has hints on how to set something like an "XNIO bind address"?
"-Dswarm.bind.address=localhost" did not fix the issue.
If you think that is the only problem, just force WildFly to bind to a different IP.
When you run it, add -b 0.0.0.0 or whatever IP you want:
standalone.sh / .bat -b <ip-to-bind-to>
Similarly, to bind the management interface, use -bmanagement <ip>
Another option is to edit standalone.xml under the <interfaces> section.
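For reference, the standalone.xml section mentioned above looks roughly like this in a stock WildFly config (the 127.0.0.1 fallback here is just an example value):

```xml
<interfaces>
    <interface name="public">
        <!-- used unless overridden on the command line via -b / jboss.bind.address -->
        <inet-address value="${jboss.bind.address:127.0.0.1}"/>
    </interface>
</interfaces>
```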
Thanks for the input. I tried both -b 127.0.0.1 and -Dswarm.bind.address=127.0.0.1.
XNIO still fails on the socket binding, so I guess it's not related to the bind address and is more of a port problem.
Is it using IPv6 to bind the socket? If so, try forcing the JVM to use IPv4 by setting the
-Djava.net.preferIPv4Stack=true system property.
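A quick way to check whether a plain IPv4 loopback bind works at all in that environment, independent of the Swarm setup, is a tiny standalone class run once with and once without the flag (just a diagnostic sketch, not part of the deployment):

```java
import java.net.InetAddress;
import java.net.ServerSocket;

public class Ipv4BindCheck {
    public static void main(String[] args) throws Exception {
        // Bind an ephemeral port on the IPv4 loopback address. With
        // -Djava.net.preferIPv4Stack=true this goes through the IPv4
        // stack only. If it fails with "Address family not supported
        // by protocol family", the problem sits below XNIO, in the
        // JVM/OS socket layer rather than in the Swarm configuration.
        try (ServerSocket s = new ServerSocket(0, 50, InetAddress.getByName("127.0.0.1"))) {
            System.out.println("bound " + s.getInetAddress().getHostAddress()
                    + ":" + s.getLocalPort());
        }
    }
}
```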
Hi jaikiran pai,
I have already made sure to pass the IPv4 stack argument to the application. It's shown in the trace log above:
2017-01-10 02:24:00,678 INFO [org.wildfly.swarm] (MSC service thread 1-1) WFSWARM0029: Install MSC service for command line args: [-Dswarm.logging=TRACE, -Dswarm.http.port=11815, -Djava.net.preferIPv4Stack=true]
I will answer my own question:
My guess is that renaming the originally deployed artifact (e.g. dummy1-swarm.jar -> dummy2-swarm.jar) led to the port staying blocked, even after I killed the process.
Starting dummy2-swarm.jar, or for that matter the demo-swarm.jar from above, then led to the exception described in this thread.
A clean restart of the VM through the Azure Portal helped in my case.