0 Replies Latest reply on Jul 9, 2013 8:34 AM by Valentin Podlovchenko

    Clustered singleton service vs. better application architecture

    Valentin Podlovchenko Newbie

      Hello to everybody!

       

      I've got some problems with the clustered singleton service in JBoss AS 7. Some of them are fixed, some are not (see https://community.jboss.org/thread/196934).

       

      Here is my thinking: if it takes this many problems to get it running, and the approach has performance drawbacks (calling the service is much slower than calling a @Singleton bean, even on the same node), maybe it's the wrong way and I should change my application architecture instead?

       

      I have two applications that I think need singleton services (both use Hibernate JPA):

       

      1. "LocationService" problem:

       

      I have a large table of station locations (~15,000 records, say "station_table") and a very large values table (~300,000 records, "values_table") related to "station_table" (each value record belongs to a station). The application needs lists of the stations actually used in "values_table", and (for the user interface) it needs the used stations whose names begin with 'a', 'al', 'ala', 'b', 'be', 'bel', and so on. Running that query over the join of "station_table" and "values_table" takes minutes, so we keep an "artificial" (denormalized, not SQL-normalized) table ("locations_table") where we track all used stations: when we add a new value and its station is not yet in "locations_table", we add the station; when we remove a value and no values for that station remain in "values_table", we remove the station from "locations_table". Queries over "locations_table" return the listings in milliseconds.

       

      It seems that updates to "locations_table" need to be synchronized: it is not acceptable for two different beans to try to add the same station to "locations_table" (the station index must be unique). In a non-clustered environment a @Singleton bean does this job well. How can I do this in a clustered environment without using a singleton service? Or is the bean that checks and adds stations ("LocationsBean") simply wrong from a JPA-architecture point of view?

       

      Code snippet:

       

      @Singleton
      public class LocationService {

                @PersistenceContext
                private EntityManager em;

                // Runs in its own transaction so the cleanup does not extend the
                // long-running transaction that inserts values.
                @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
                public void clearLocations() {
                          Query query = em.createNativeQuery("delete from locations where index not in " +
                                              "(select location_index from weather_values)");
                          query.executeUpdate();
                }

                @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
                public void insertLocation(T1Meteo location) {
                          // check-then-insert: only safe while a single @Singleton serializes callers
                          Query q = em.createNativeQuery("select count(1) from locations where index = :index");
                          q.setParameter("index", location.getIndex());
                          BigInteger count = (BigInteger) q.getSingleResult();
                          if (count.intValue() == 0) {
                                    Location loc = new Location();
                                    loc.setIndex(location.getIndex());
                                    loc.setName(location.getRU());
                                    em.persist(loc);
                          }
                }
      }
      
      

       

       

      We use REQUIRES_NEW because the transactions that insert values can be quite long (up to 1000 value records per transaction, running up to a few minutes on some systems). If we held locks on locations_table for that long, it would block other beans from reading it.

       

      If we do this work outside a new transaction scope and outside a singleton bean, we get a lot of "unique key violation" exceptions.

       

      Is this a case of wrong JPA usage? Or should we try to force the transactions to be shorter (one transaction per record)?
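One cluster-safe alternative (a sketch, not tested against this schema) is to stop checking before inserting and let the database's unique index arbitrate instead: every node simply attempts the insert and treats "duplicate" as "already done", either with a single atomic "insert ... where not exists" native statement or by catching the constraint-violation exception. The runnable stand-in below illustrates the attempt-and-ignore-duplicate pattern; the ConcurrentMap plays the role of the unique index on the station index column (the class and the sample index value are hypothetical).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class InsertIfAbsentDemo {

    // Stands in for the unique index on locations.index: the map itself
    // rejects duplicates atomically, just as the database constraint would.
    private static final ConcurrentMap<Integer, Boolean> locations =
            new ConcurrentHashMap<Integer, Boolean>();

    // Cluster-safe pattern: attempt the insert unconditionally and treat
    // "already present" as success, instead of check-then-insert.
    public static boolean insertLocation(int index) {
        // null means we inserted it; non-null means another caller won the race
        return locations.putIfAbsent(index, Boolean.TRUE) == null;
    }

    public static void main(String[] args) {
        System.out.println(insertLocation(26063)); // true: first insert wins
        System.out.println(insertLocation(26063)); // false: duplicate silently ignored
    }
}
```

With this pattern no JVM-level singleton is needed for correctness: every node can run the insert concurrently, and at most one of them actually creates the row.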

       

      2. "UserSessionService" problem

       

      We have an EJB3 client-server application where the client is an SWT application connected to JBoss AS. We need to hold user session information in a central point (a list of connected users and locks for open documents). As in the previous example, in a non-clustered environment this job is done well by a simple @Singleton bean; see the code snippet:

       

       

      @Singleton
      public class ServerSessionService {

                private static final Logger log = Logger.getLogger(ServerSessionService.class.getName());

                private final Map<User, Date> users;
                private final Map<Frame, ConcurrentHashMap<User, Mode>> locks;

          public ServerSessionService() {
                    users = new ConcurrentHashMap<User, Date>();
                    locks = new ConcurrentHashMap<Frame, ConcurrentHashMap<User, Mode>>();
          }
      
      
                public boolean login(User user) {
                    log.info("login(" + user + ") " + this.users);
                          if (!users.containsKey(user)) {
                                    users.put(user, new Date());
                                    return true;
                          }
                          return false;
                }
      
      
                public void logout(User user) {
                    log.info("logout(" + user + ") " + this.users);
                          users.remove(user); // no-op if the user is absent
                          for (Frame frame: locks.keySet()) {
                                    ConcurrentHashMap<User, Mode> frameLocks = locks.get(frame);
                                    if (frameLocks.remove(user) != null && frameLocks.isEmpty())
                                              locks.remove(frame);
                          }
                }
      
      
                public FrameLock[] getLocks(Frame frame) {
                          log.info("getLocks(" + frame + ") " + this.users);
                          if (locks.containsKey(frame)) {
                                    FrameLock[] result = new FrameLock[locks.get(frame).size()];
                                    int i = 0;
                                    for (User user: locks.get(frame).keySet())
                                              result[i++] = new FrameLock(user, locks.get(frame).get(user));
                                    return result;
                          }
                          return new FrameLock[0];
                }
      
      
                // READ locks are shared; WRITE is exclusive: a user gets WRITE
                // only if no other user holds any lock on the frame.
                public boolean lock(User user, Frame frame, Mode mode) {
                          log.info("lock(" + user + ", " + frame + ", " + mode + ") " + this.users);
                          if (mode == Mode.READ) {
                                    if (locks.containsKey(frame)) {
                                              if (locks.get(frame).values().contains(Mode.WRITE))
                                                        return false;
                                              locks.get(frame).put(user, mode);
                                    } else {
                                              ConcurrentHashMap<User, Mode> map = new ConcurrentHashMap<User, Mode>();
                                              map.put(user, mode);
                                              locks.put(frame, map);
                                    }
                          } else {
                                    if (locks.containsKey(frame)) {
                                              boolean result = true;
                                              Enumeration<User> keys = locks.get(frame).keys(); 
                                              while (keys.hasMoreElements()) {
                                                        if (!keys.nextElement().equals(user)) {
                                                                  result = false;
                                                                  break;
                                                        }
                                              }
                                              if (result)
                                                        locks.get(frame).put(user, Mode.WRITE);
                                              return result;
                                    }
                                    ConcurrentHashMap<User, Mode> map = new ConcurrentHashMap<User, Mode>();
                                    map.put(user, mode);
                                    locks.put(frame, map);
                          }
                          return true;
                }
      
      
                public List<SessionDescriptor> listUsers() {
                          log.info("listUsers() " + this.users);
                          List<SessionDescriptor> result = new ArrayList<SessionDescriptor>();
                          for (User u: users.keySet())
                                    result.add(new SessionDescriptor(u, users.get(u)));
                          return result;
                }
      
      
                public void unlock(User currentUser, Frame frame) {
                          log.info("unlock(" + currentUser + ", " + frame + ")");
                          if (locks.containsKey(frame) && locks.get(frame).containsKey(currentUser)) {
                                    locks.get(frame).remove(currentUser);
                                    if (locks.get(frame).size() == 0)
                                              locks.remove(frame);
                          }
                }
      
      
                public void closeSession(SessionDescriptor sessionDescriptor) {
                          logout(sessionDescriptor.getUser());
                }
      
      
                public boolean hasSession(User user) {
                          return users.containsKey(user);
                }
      
      }
      
      

       

      But how can I do the same in a clustered environment? What application architecture would be better for these tasks?
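One possible direction (a sketch under the assumption that a cluster-replicated ConcurrentMap is available; in JBoss AS 7, an Infinispan replicated cache implements ConcurrentMap) is to code the bean against the ConcurrentMap interface and swap the backing map per environment. The sketch below also replaces the non-atomic contains-then-put in login() with putIfAbsent, which does the check and the put in one atomic step; a plain ConcurrentHashMap stands in here so the example runs anywhere, and the class name is hypothetical.

```java
import java.util.Date;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class SessionStore {

    // In production this could be a cluster-replicated map (e.g. an
    // Infinispan cache, which implements ConcurrentMap); a local
    // ConcurrentHashMap stands in so the sketch is runnable.
    private final ConcurrentMap<String, Date> users;

    public SessionStore(ConcurrentMap<String, Date> backing) {
        this.users = backing;
    }

    // putIfAbsent is atomic, so two concurrent logins for the same
    // user cannot both succeed, even across nodes on a replicated map.
    public boolean login(String user) {
        return users.putIfAbsent(user, new Date()) == null;
    }

    public void logout(String user) {
        users.remove(user);
    }

    public boolean hasSession(String user) {
        return users.containsKey(user);
    }

    public static void main(String[] args) {
        SessionStore store = new SessionStore(new ConcurrentHashMap<String, Date>());
        System.out.println(store.login("alice"));  // true: first login wins
        System.out.println(store.login("alice"));  // false: already logged in
        store.logout("alice");
        System.out.println(store.login("alice"));  // true: can log in again
    }
}
```

The lock map could be handled the same way, keeping the bean logic intact while the replication concern moves into the map implementation rather than into a clustered singleton service.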


      Any help would be much appreciated)))