4 Replies Latest reply on Jan 19, 2009 10:28 AM by Grzegorz Klimek

    Deadlock in JDBCCacheLoader

    Grzegorz Klimek Newbie

      It is possible to hit a deadlock in JDBCCacheLoader when concurrently putting two different entries into it. The timing has to be right, of course, and even then the chances are small (roughly 1 in 16384), but it is still a problem.
      I first found it in version 1.4.1.SP8, and it is still present in 3.0.1.GA.

      I have put together a unit test (for 3.0.1.GA) which demonstrates the issue: putting Fqns "/a/b" and "/c/d" at the same time is fine, but putting "/a/m" and "/l/p" deadlocks.

      import static org.easymock.EasyMock.*;

      import java.sql.Connection;
      import java.sql.DatabaseMetaData;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.util.Hashtable;
      import java.util.Properties;

      import javax.naming.Context;
      import javax.naming.NamingException;
      import javax.naming.spi.InitialContextFactory;
      import javax.sql.DataSource;

      import junit.framework.TestCase;

      import org.easymock.Capture;
      import org.easymock.IAnswer;
      import org.jboss.cache.CacheSPI;
      import org.jboss.cache.Fqn;
      import org.jboss.cache.config.CacheLoaderConfig.IndividualCacheLoaderConfig;
      import org.jboss.cache.io.ByteBuffer;
      import org.jboss.cache.loader.JDBCCacheLoader;
      import org.jboss.cache.marshall.Marshaller;

      public class JDBCCacheLoaderTest extends TestCase {

          // The delay inside executeQuery() widens the window in which both
          // putter threads hold their first lock, making the race reproducible.
          private static final long EXECUTE_QUERY_DELAY = 1000;
          private static final long MAX_TIME_FOR_PUT = 10000;

          private static Context context;

          public static class SimpleContextFactory implements InitialContextFactory {
              public Context getInitialContext(Hashtable<?, ?> environment) throws NamingException {
                  return context;
              }
          }

          private JDBCCacheLoader jdbcCacheLoader;

          protected void setUp() throws Exception {
              System.setProperty("java.naming.factory.initial", "JDBCCacheLoaderTest$SimpleContextFactory");

              // JDBC mocks
              context = createNiceMock("context", Context.class);
              final CacheSPI cacheSPI = createNiceMock("cacheSPI", CacheSPI.class);
              final Marshaller marshaller = createNiceMock("marshaller", Marshaller.class);
              final DataSource dataSource = createNiceMock("dataSource", DataSource.class);
              final Connection connection = createNiceMock("connection", Connection.class);
              final DatabaseMetaData databaseMetaData = createNiceMock("databaseMetaData", DatabaseMetaData.class);
              final PreparedStatement preparedStatement = createNiceMock("preparedStatement", PreparedStatement.class);
              final ResultSet resultSet = createNiceMock("resultSet", ResultSet.class);

              makeThreadSafe(cacheSPI, true);
              makeThreadSafe(marshaller, true);
              makeThreadSafe(dataSource, true);
              makeThreadSafe(connection, true);
              makeThreadSafe(databaseMetaData, true);
              makeThreadSafe(preparedStatement, true);
              makeThreadSafe(resultSet, true);

              // mock expectations
              expect(context.lookup((String) anyObject())).andStubReturn(dataSource);
              expect(cacheSPI.getMarshaller()).andStubReturn(marshaller);
              expect(dataSource.getConnection()).andStubReturn(connection);
              expect(connection.getMetaData()).andStubReturn(databaseMetaData);
              expect(marshaller.objectToBuffer(capture(new Capture<Object>()))).andStubReturn(new ByteBuffer(new byte[] {'a'}, 0, 1));
              expect(databaseMetaData.getDriverName()).andStubReturn("a driver name");
              expect(connection.prepareStatement(capture(new Capture<String>()))).andStubReturn(preparedStatement);
              expect(preparedStatement.executeQuery()).andStubAnswer(new IAnswer<ResultSet>() {
                  public ResultSet answer() throws Throwable {
                      // Slow the SELECT down so that both putters hold their
                      // first lock before either asks for the second one.
                      Thread.sleep(EXECUTE_QUERY_DELAY);
                      return resultSet;
                  }
              });
              replay(context, cacheSPI, marshaller, dataSource, connection, databaseMetaData, preparedStatement, resultSet);

              // JDBCCacheLoader init, wired to the mock DataSource via JNDI
              Properties props = new Properties();
              props.setProperty("cache.jdbc.datasource", "java:/TestDS");
              IndividualCacheLoaderConfig config = new IndividualCacheLoaderConfig();
              config.setProperties(props);
              jdbcCacheLoader = new JDBCCacheLoader();
              jdbcCacheLoader.setConfig(config);
              jdbcCacheLoader.setCache(cacheSPI);
              jdbcCacheLoader.create();
              jdbcCacheLoader.start();
          }

          // Fails on 3.0.1.GA: both putters are still deadlocked after the timeout.
          public void testPutCaseDeadlock() throws Exception {
              Thread thread1 = new Thread(new Putter("/a/m"));
              Thread thread2 = new Thread(new Putter("/l/p"));
              thread1.start();
              thread2.start();
              thread1.join(MAX_TIME_FOR_PUT);
              thread2.join(MAX_TIME_FOR_PUT);
              assertEquals(Thread.State.TERMINATED, thread1.getState());
              assertEquals(Thread.State.TERMINATED, thread2.getState());
          }

          public void testPutCaseNoDeadlock() throws Exception {
              Thread thread1 = new Thread(new Putter("/a/b"));
              Thread thread2 = new Thread(new Putter("/c/d"));
              thread1.start();
              thread2.start();
              thread1.join(MAX_TIME_FOR_PUT);
              thread2.join(MAX_TIME_FOR_PUT);
              assertEquals(Thread.State.TERMINATED, thread1.getState());
              assertEquals(Thread.State.TERMINATED, thread2.getState());
          }

          private class Putter implements Runnable {
              private String name;

              public Putter(String name) {
                  this.name = name;
              }

              public void run() {
                  try {
                      jdbcCacheLoader.put(Fqn.fromString(name), "a key", "a value");
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
              }
          }
      }
        • 1. Re: Deadlock in JDBCCacheLoader
          Manik Surtani Master

          This has to do with the striped locks used by the JDBCCacheLoader and the fact that the two fqns you chose generate hash codes that collide, and hence map to the same shared lock.

          There is currently no mechanism for increasing the concurrency level (by increasing the number of shared locks), although this would probably be a useful thing to make configurable. Feel free to create a JIRA for this.
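          For illustration, hash-based lock striping works roughly like this. This is a hypothetical sketch, not the actual org.jboss.cache.lock.StripedLock source; the class name and the stripe count of 128 are assumptions made for the example.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of hash-based lock striping; NOT the actual
// org.jboss.cache.lock.StripedLock implementation.
public class StripedLockSketch {
    private final ReentrantReadWriteLock[] stripes;

    public StripedLockSketch(int concurrencyLevel) {
        stripes = new ReentrantReadWriteLock[concurrencyLevel];
        for (int i = 0; i < stripes.length; i++) {
            stripes[i] = new ReentrantReadWriteLock();
        }
    }

    // Keys whose hash codes are congruent modulo the stripe count share
    // a lock, so two entirely unrelated Fqns can contend with each other.
    public int stripeIndexFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % stripes.length;
    }

    public ReentrantReadWriteLock lockFor(Object key) {
        return stripes[stripeIndexFor(key)];
    }

    public static void main(String[] args) {
        StripedLockSketch locks = new StripedLockSketch(128);
        System.out.println("/a/m -> stripe " + locks.stripeIndexFor("/a/m"));
        System.out.println("/l/p -> stripe " + locks.stripeIndexFor("/l/p"));
    }
}
```

          With 128 stripes, a particular cross-collision of two random fqn pairs has probability on the order of 1/128², which lines up with the "1 in 16384" odds mentioned in the original post.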


          • 2. Re: Deadlock in JDBCCacheLoader
            Manik Surtani Master

             See line 72 of AdjListJDBCCacheLoader for the details.

            • 3. Re: Deadlock in JDBCCacheLoader
              Grzegorz Klimek Newbie

              Actually, the two fqns' hash codes do not map to the same shared lock.
              Let's say that "/a/m" maps to lock X and "/l/p" maps to lock Y; then "/a" maps to lock Y and "/l" maps to lock X.
              While executing put(), JDBCCacheLoader acquires at least two locks: first the lock for the fqn itself (in exclusive mode), and then, when it calls exists(), the lock for the fqn's parent.
              In general, whenever you have a scenario that acquires two locks, you have to be sure there is no concurrent scenario that acquires the same locks in the reverse order; otherwise you risk a deadlock. And once one happens, the rest of your locks quickly end up deadlocked as well.
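              The two-lock inversion can be reproduced with plain ReentrantLocks, independent of JBoss Cache. In this self-contained sketch the lock names mirror the X/Y scenario above, and tryLock with a timeout stands in for the real blocking acquire so the demo itself cannot hang.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Standalone illustration of a lock-order inversion; unrelated to the
// actual JBoss Cache classes.
public class LockInversionDemo {

    // Returns true if both threads, each holding one lock, fail to obtain
    // the other one within the timeout, i.e. they are mutually blocked.
    public static boolean inversionBlocks() {
        final ReentrantLock lockX = new ReentrantLock(); // stripe shared by "/a/m" and "/l"
        final ReentrantLock lockY = new ReentrantLock(); // stripe shared by "/l/p" and "/a"
        final CountDownLatch bothHoldFirst = new CountDownLatch(2);
        final CountDownLatch bothTried = new CountDownLatch(2);
        final boolean[] gotSecond = new boolean[2];

        Thread t1 = worker(lockX, lockY, bothHoldFirst, bothTried, gotSecond, 0); // put("/a/m"): X then Y
        Thread t2 = worker(lockY, lockX, bothHoldFirst, bothTried, gotSecond, 1); // put("/l/p"): Y then X
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !gotSecond[0] && !gotSecond[1];
    }

    private static Thread worker(final ReentrantLock first, final ReentrantLock second,
                                 final CountDownLatch bothHoldFirst, final CountDownLatch bothTried,
                                 final boolean[] gotSecond, final int slot) {
        return new Thread(new Runnable() {
            public void run() {
                first.lock();
                try {
                    bothHoldFirst.countDown();
                    bothHoldFirst.await(); // wait until the other thread holds its first lock
                    gotSecond[slot] = second.tryLock(200, TimeUnit.MILLISECONDS);
                    if (gotSecond[slot]) {
                        second.unlock();
                    }
                    bothTried.countDown();
                    bothTried.await();     // keep the first lock until both attempts are over
                } catch (InterruptedException ignored) {
                } finally {
                    first.unlock();
                }
            }
        });
    }

    public static void main(String[] args) {
        System.out.println("mutual blocking: " + inversionBlocks());
    }
}
```

              The two latches force the schedule the bug needs: both threads hold their first lock before either asks for the second, and neither releases anything until both attempts have failed.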

              I have to admit the chances of this are small, but I have seen a production server go down because of it. From the moment the first deadlock happened, the server stopped responding to certain requests, and within the next two hours it was not responding at all. The number of deadlocked threads snowballed until eventually all of them were locked.

              A solution I see for this issue would be to maintain one StripedLock instance per fqn depth, and to review the code to make sure that whenever you acquire two locks you start with the child and then go to the parent (you also have to be careful about the exclusive mode).
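              Another standard way to get the ordering guarantee is to always take the two stripes in one global order, e.g. ascending stripe index. The following is a hypothetical sketch of that idea, not a patch to JDBCCacheLoader itself (class and method names are invented for the example).

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: when an operation needs the stripes of both a node
// and its parent, take them in one global order (ascending stripe index),
// so no two threads can ever hold the pair in opposite orders.
public class OrderedStripes {
    private final ReentrantLock[] stripes;

    public OrderedStripes(int n) {
        stripes = new ReentrantLock[n];
        for (int i = 0; i < n; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    private int indexFor(String fqn) {
        return (fqn.hashCode() & 0x7fffffff) % stripes.length;
    }

    // Lock the child and parent stripes in ascending index order; if they
    // collide onto the same stripe, one lock is enough.
    public void lockBoth(String child, String parent) {
        int lo = Math.min(indexFor(child), indexFor(parent));
        int hi = Math.max(indexFor(child), indexFor(parent));
        stripes[lo].lock();
        if (hi != lo) {
            stripes[hi].lock();
        }
    }

    public void unlockBoth(String child, String parent) {
        int lo = Math.min(indexFor(child), indexFor(parent));
        int hi = Math.max(indexFor(child), indexFor(parent));
        if (hi != lo) {
            stripes[hi].unlock();
        }
        stripes[lo].unlock();
    }

    // Two threads hammer the crossed child/parent pairs from the deadlock
    // scenario; with ordered acquisition both must terminate.
    public static boolean concurrentExercise() {
        final OrderedStripes locks = new OrderedStripes(16);
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    locks.lockBoth("/a/m", "/a");
                    locks.unlockBoth("/a/m", "/a");
                }
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    locks.lockBoth("/l/p", "/l");
                    locks.unlockBoth("/l/p", "/l");
                }
            }
        });
        t1.start();
        t2.start();
        try {
            t1.join(10000);
            t2.join(10000);
        } catch (InterruptedException e) {
            return false;
        }
        return t1.getState() == Thread.State.TERMINATED
                && t2.getState() == Thread.State.TERMINATED;
    }

    public static void main(String[] args) {
        System.out.println("both putters finished: " + concurrentExercise());
    }
}
```

              Sorting by stripe index makes the acquisition order total, so a cycle between two threads is impossible regardless of which fqns collide.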


              • 4. Re: Deadlock in JDBCCacheLoader
                Grzegorz Klimek Newbie

                As advised, I have created https://jira.jboss.org/jira/browse/JBCACHE-1466 for this issue.