I am designing a JCA adapter that needs to support connections to one or more instances of the same EIS (a legacy system). The EIS instances are currently stateless and read-only, so affinity is not an issue (yet!). However, each connection to an EIS involves establishing a TCP/IP socket connection, an activity I would rather limit if at all possible.
Connections to each EIS instance have a "classification", in that certain instances are used to service certain types of request (typically routing long-running requests to different servers than short-running ones, but also for different data models, etc.).
Currently my ManagedConnectionFactory implementation creates each ManagedConnection on demand, based on the ConnectionRequestInfo passed from the client.
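For illustration, my ConnectionRequestInfo essentially just carries the classification. This is a standalone sketch (the real class would implement javax.resource.spi.ConnectionRequestInfo; the name EisRequestInfo is made up here), but it shows why the equals/hashCode contract matters: the container compares request infos when deciding what to pool and match.

```java
// Illustrative stand-in for a javax.resource.spi.ConnectionRequestInfo
// implementation; shown without the connector API so it is self-contained.
class EisRequestInfo {
    private final String classification; // e.g. "LONG_RUNNING", "SHORT_RUNNING"

    EisRequestInfo(String classification) {
        this.classification = classification;
    }

    String getClassification() {
        return classification;
    }

    // The JCA contract requires equals/hashCode so the container can
    // compare request infos when pooling and matching connections.
    @Override
    public boolean equals(Object o) {
        return o instanceof EisRequestInfo
            && ((EisRequestInfo) o).classification.equals(classification);
    }

    @Override
    public int hashCode() {
        return classification.hashCode();
    }
}
```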
The problem is that once a connection to a particular EIS instance has been established, I would like it to remain open after it is returned to the pool, so that the next client does not have to wait for the physical TCP/IP connection to be re-established.
This approach works until I add the classification logic, and specifically the matchManagedConnections functionality: if the connections in the pool are not of the correct classification, they are destroyed.
Is there any way to override this behaviour so that non-matching connections are returned to the pool rather than destroyed?

That said, I can see this is not ideal either, since the pool would then never shrink. If I suddenly get a "spike" of 100 clients requesting connections, I will be left with at least 100 active connections to my EIS sitting in the pool, potentially never used again. The ConnectionManager clearly has no strategy for "trimming" the pool back down to size (when to destroy and when to retain). Perhaps matchManagedConnections could be extended to let the developer examine a connection, determine whether it is a match (the current behaviour), and, if there is no match, decide whether it should be kept, returning a flag indicating whether the connection should be destroyed or not.
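To show what I mean by matching on classification, here is the matching logic as a standalone sketch. In the real adapter this lives inside ManagedConnectionFactory.matchManagedConnections(Set, Subject, ConnectionRequestInfo); the names PooledEisConnection and getClassification are illustrative only.

```java
import java.util.Set;

// Stand-in for the matching step of matchManagedConnections: pick a pooled
// connection whose classification matches the incoming request, if any.
class Matcher {
    static PooledEisConnection match(Set<PooledEisConnection> pool, String wanted) {
        for (PooledEisConnection mc : pool) {
            if (wanted.equals(mc.getClassification())) {
                return mc; // reuse: right EIS class, socket already open
            }
        }
        return null; // no match: container creates a new connection
                     // (and, today, may destroy the non-matching ones)
    }
}

// Illustrative pooled-connection holder; the real one is a ManagedConnection.
class PooledEisConnection {
    private final String classification;
    PooledEisConnection(String classification) { this.classification = classification; }
    String getClassification() { return classification; }
}
```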
Would it be a valid work-around to extend the ManagedConnectionFactory to act as a secondary pool for connections, so that each ManagedConnection actually obtains its real "physical" connection (the TCP/IP socket to the EIS) from the ManagedConnectionFactory? It would then be the factory's responsibility to maintain the collection of "real" connections, and hence manage the breadth and depth of the physical connection pool (by classification), so that threshold quantities of each classification of connection can be maintained and allocated when required. This clearly bypasses the whole connection-pooling aspect of the application server, which makes me nervous - but is there another way?
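A rough sketch of what I mean by that secondary pool, keyed by classification with a per-classification idle cap so a spike cannot pin sockets forever. SocketHandle and the cap are my own illustrative assumptions, not part of the JCA SPI, and real code would of course open/close actual sockets and worry about staleness:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Secondary pool held inside the ManagedConnectionFactory: physical sockets
// keyed by classification, with a cap on how many idle sockets each
// classification may retain.
class PhysicalPool {
    private final int maxIdlePerClassification;
    private final Map<String, Deque<SocketHandle>> idle = new HashMap<>();

    PhysicalPool(int maxIdlePerClassification) {
        this.maxIdlePerClassification = maxIdlePerClassification;
    }

    /** Hand out an idle socket for this classification, or open a new one. */
    synchronized SocketHandle acquire(String classification) {
        Deque<SocketHandle> q = idle.get(classification);
        if (q != null && !q.isEmpty()) {
            return q.pop();                      // reuse an already-open socket
        }
        return new SocketHandle(classification); // would open TCP/IP here
    }

    /** Return a socket; close it instead if the classification is at its cap. */
    synchronized void release(SocketHandle s) {
        Deque<SocketHandle> q =
            idle.computeIfAbsent(s.classification, k -> new ArrayDeque<>());
        if (q.size() < maxIdlePerClassification) {
            q.push(s);
        } else {
            s.close(); // trim: don't hoard sockets beyond the threshold
        }
    }

    synchronized int idleCount(String classification) {
        Deque<SocketHandle> q = idle.get(classification);
        return q == null ? 0 : q.size();
    }
}

// Illustrative wrapper around the physical TCP/IP connection.
class SocketHandle {
    final String classification;
    boolean open = true;
    SocketHandle(String classification) { this.classification = classification; }
    void close() { open = false; }
}
```

The acquire/release pair is what each ManagedConnection would call instead of opening and closing its own socket, which is exactly the part that bypasses the application server's pool.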
Or is this very wrong? :)