One difference between option #1 and options #2/3 is the coupling. In option #1, the coupling occurs when the Docker container (containing both the agent and the middleware product) is built at release time. That is a tight coupling. Nothing necessarily wrong with that, but the situation gets more complex when the agent and the middleware product are on different release schedules: compatibility, testing, and the coordination required between two independent teams. Still doable, but if there is a portfolio of, say, 10 middleware products, the complexity from this tight coupling might become difficult to manage. Compare that to options #2/3, where the coupling between agent and middleware product lives in the pod definition, an easily editable JSON file. That is a much looser coupling, and might be less problematic from a coupling perspective.
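To make the looser coupling concrete, here is a minimal sketch of what such a pod definition might look like. The image names, tags, and pod name are all hypothetical, and the exact schema depends on the Kubernetes API version in use; the point is that re-pairing an agent version with a middleware version is just an edit to this file, not a rebuild of a combined image:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "wildfly-with-agent" },
  "spec": {
    "containers": [
      { "name": "wildfly",   "image": "example/wildfly:8.1" },
      { "name": "rhq-agent", "image": "example/rhq-agent:4.12" }
    ]
  }
}
```

Swapping either image tag independently is exactly the decoupling of release schedules described above.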
I guess the answer would depend on what you want to monitor/manage.
Option 1: If the agent is within the container, you can do anything because agent and Wfly are "local" to each other.
Option 2: If the agent is in the same pod, it shares the IP address with Wfly but NOT (necessarily) storage. This means file-based monitoring/management may not be possible, unless the files in question are on a shared volume.
Option 3: You can monitor/manage remotely and might be able to use Docker to "look inside" the containers (using docker exec, docker top and the like).
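For what it's worth, the kind of host-side introspection option 3 allows might look like the following. The container name is hypothetical, and what is available depends on the Docker version (docker exec only arrived in Docker 1.3):

```shell
# List the processes running inside a container without entering it
docker top wildfly-container

# Run a one-off command inside the container (Docker 1.3+)
docker exec wildfly-container ls /opt/jboss/wildfly/standalone/log

# Stream the last lines of the container's stdout/stderr
docker logs --tail 50 wildfly-container
```

This gives the agent visibility from outside, but it is coarser than what a co-located agent (option 1) can do.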
Also, you are increasing the number of things the agent is responsible for with each option, so there is a scalability concern here, too.
Thanks for the insight on option 3; I don't know exactly what's possible there. The final answer is probably a mix.
The other thing to consider is who would manage/install the agent: for options 1 and 2 it can be installed by the owner of the application, but for option 3 it is managed by the owner of the infrastructure.
I have been experimenting with Docker and most recently Kubernetes so I thought I should chime in here.
In light of recent developments in container-based OSes such as Project Atomic and CoreOS, perhaps we can come up with a "super agent mode" that provides management support for the host/bare-metal platform (infrastructure) as well as its container children (applications) as first-class citizens. Something like rhq-agent -> cadvisor -> containers. This super mode works well for private/on-premise installations where you have full control of the host.
The next flavor, "cloud mode", is suitable for AWS or GCE users who don't have access to the host/underlying infrastructure. The agent runs by itself in its own container and can then manage other resources, providing basic container monitoring via cadvisor, or managing middleware apps via plugins by means of inter-container networking. File-based monitoring becomes more the responsibility of the cloud provider, which in a perfect world would already be running the super rhq-agent.
Thank you Viet, that's helpful.
We need to figure out how to "expose" the Wildfly metrics (or any other MW piece) to the "super agent"
Right, that's the part I don't see right now, unless an agent runs in the container as in option 1. It seems cadvisor could give us monitoring of the machine and the containers, but no visibility into what's actually running inside a container. Something needs to report from the container to the outside world.
Does the RHQ JBoss/Wildfly plugin communicate with the Wildfly management interface through localhost:9999? If so, then option #2 should work.
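As a quick sanity check for that, something like the sketch below could run in the agent's container to confirm the management port is reachable over the pod-shared localhost. This is only an illustration, not RHQ plugin code: the function name is made up, 9999 is the classic native management port (newer WildFly releases expose http-management on 9990 instead), so adjust as needed:

```python
import socket

def management_port_reachable(host="localhost", port=9999, timeout=2.0):
    """Return True if a TCP connection to the (assumed) WildFly
    management port succeeds. Containers in the same pod share the
    network namespace, so from a sidecar 'localhost' reaches Wfly
    exactly as it would on the same machine."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this connects from the sidecar container, the plugin's existing localhost-based transport should work unchanged in option #2.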