0 Replies Latest reply on Sep 27, 2012 12:32 AM by Chris Corbell

    JNDI/LDAP TCP dynamic port 'leak' in JBoss 4.0.5 GA

    Chris Corbell Newbie

      I am seeing some behavior I do not expect and have not found any reference to it online. I am wondering whether anyone else has seen this, and whether it is a configuration issue or perhaps an issue that has already been addressed.

       

      In a nutshell, a JNDI LDAP connection is "leaking" a dynamic TCP port in TIME_WAIT status for every invocation of a context search() call (an LDAP query). This is not a matter of using different context instances: with the same open context instance in the same thread, repeated calls to search() trigger these leaks. I am concerned that in large-scale environments this will become a serious issue if all available dynamic ports get consumed under heavy load. I want a single JNDI LDAP connection to use a single dynamic TCP port for the life of that context, not throw off hundreds of TIME_WAIT entries consuming other dynamic ports (even if only temporarily).

       

      My server application uses stateless EJBs for client requests and also has Quartz jobs running for ongoing background work, including periodic LDAP synchronization (fetching data for users/groups via LDAP queries).

       

      The background job creates one JNDI LDAP context (per periodic execution) that it uses for its LDAP queries. I can see the connection in netstat:
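For reference, this is roughly how the context is set up; a minimal sketch of the one-context-per-run pattern, assuming the standard Sun JNDI LDAP provider. The host comes from the netstat output below; the port, bind DN, and credentials are hypothetical placeholders.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapContextSketch {

    // Builds the JNDI environment for the LDAP context.
    static Hashtable<String, Object> ldapEnv() {
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://mozz.mtest.exten:389"); // host from netstat; port assumed
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=sync,dc=example,dc=com"); // hypothetical bind DN
        env.put(Context.SECURITY_CREDENTIALS, "secret");                  // hypothetical
        return env;
    }

    // One context per periodic execution, closed when the run finishes.
    static void periodicRun() throws NamingException {
        DirContext ctx = new InitialDirContext(ldapEnv());
        try {
            // ... LDAP queries for users/groups ...
        } finally {
            ctx.close();
        }
    }

    public static void main(String[] args) {
        // Print the configured factory just to show the environment is built.
        System.out.println(ldapEnv().get(Context.INITIAL_CONTEXT_FACTORY));
    }
}
```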

       

      $ netstat -p tcp | grep ldap

      tcp4       0      0  10.1.6.22.55091        mozz.mtest.exten.ldap  ESTABLISHED

       

      In one particular spot in the code, a JNDI context used for LDAP queries makes many search() calls in a loop (varying the search filter for each call) and suddenly consumes a different dynamic TCP port for each query, leaving the newly consumed port dangling in TIME_WAIT status. I can't see anything about this particular query, or about how we use the JNDI context's search() method at that spot, that would cause this, but the netstat output explodes:

       

      $ netstat -p tcp | grep ldap

      tcp4       0      0  10.1.6.22.55574        mozz.mtest.exten.ldap  ESTABLISHED

      tcp4       0      0  10.1.6.22.55117        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55116        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55115        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55114        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55112        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55091        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55090        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55180        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55179        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55178        mozz.mtest.exten.ldap  TIME_WAIT

      tcp4       0      0  10.1.6.22.55177        mozz.mtest.exten.ldap  TIME_WAIT

      (... there are many, many more)
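The loop in question looks roughly like the following sketch: one open DirContext reused across many search() calls, varying only the filter. The base DN, attribute names, and helper names here are hypothetical, not our actual code.

```java
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class LdapSyncSketch {

    // Builds the per-user filter; pure string work, shown so the loop is concrete.
    static String filterFor(String uid) {
        return "(&(objectClass=person)(uid=" + uid + "))";
    }

    // The pattern that triggers the leak: same open context, repeated search() calls.
    static void syncUsers(DirContext ctx, String[] uids) throws NamingException {
        SearchControls sc = new SearchControls();
        sc.setSearchScope(SearchControls.SUBTREE_SCOPE);
        for (String uid : uids) {
            NamingEnumeration<SearchResult> results =
                ctx.search("ou=people,dc=example,dc=com", filterFor(uid), sc);
            while (results.hasMore()) {
                results.next(); // process entry
            }
            results.close(); // enumeration closed; the context itself stays open
        }
    }

    public static void main(String[] args) {
        // Show one generated filter; no LDAP server is contacted here.
        System.out.println(filterFor("ccorbell"));
    }
}
```

Each pass through the loop should, as far as I can tell, reuse the context's existing connection rather than open a new socket.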

       

      I've investigated enabling and changing the settings for JNDI LDAP connection pooling, double-checked when contexts are created and closed, and compared the search() call that triggers this with the many other search() calls my application makes, which usually don't misbehave. This seems like something lower-level than the parameters to the search call or the JNDI configuration, and I can't think of a logical cause apart from it simply being a bug.
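For completeness, these are the pooling knobs I experimented with; a sketch assuming the Sun JNDI LDAP provider's documented pooling properties. Pooling is opted into per-context via an environment property and tuned globally via system properties (the numeric values here are illustrative, not recommendations).

```java
import java.util.Hashtable;
import javax.naming.Context;

public class PoolConfigSketch {
    public static void main(String[] args) {
        // Global pool tuning (normally passed as -D flags to the JVM).
        System.setProperty("com.sun.jndi.ldap.connect.pool.maxsize", "10");
        System.setProperty("com.sun.jndi.ldap.connect.pool.prefsize", "5");
        System.setProperty("com.sun.jndi.ldap.connect.pool.timeout", "300000"); // idle ms before close

        // Per-context opt-in to connection pooling.
        Hashtable<String, Object> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://mozz.mtest.exten:389"); // host from the netstat output
        env.put("com.sun.jndi.ldap.connect.pool", "true");

        System.out.println(env.get("com.sun.jndi.ldap.connect.pool"));
    }
}
```

None of these changed the TIME_WAIT behavior in my testing.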

       

      I've seen this same behavior running under both Windows Server 2008 R2 and Mac OS X 10.6; however, I do not see it under Mac OS X 10.7.

       

      Has anyone seen this before? Any recommendations/workarounds are appreciated.

       

      - Chris