0 Replies Latest reply on Jun 18, 2004 12:35 PM by David Budworth

    Question about general practice with HA services (mbeans)

    David Budworth Newbie

      Hi all,

      I've been making services for my company as dynamic mbeans that make themselves accessible to clients via a HARMI stub bound to the JNDI tree.

      I was thinking that, in order to make life simpler, as well as give us better control of method interception (as in mbean invoke() trapping), I'd like to have the clients just get a handle on the remote mbean server and generate a dynamic proxy for the interface to the service.
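
      To make the idea concrete, here's a rough sketch of the dynamic-proxy part. All the names (GreeterMBean, Greeter, the ObjectName) are made up for illustration, and a local in-JVM MBeanServer stands in for the remote mbean server connection a JBoss client would actually hold:

      ```java
      import javax.management.MBeanServer;
      import javax.management.MBeanServerFactory;
      import javax.management.MBeanServerInvocationHandler;
      import javax.management.ObjectName;

      public class MBeanProxyDemo {
          // Hypothetical service interface; purely illustrative, not a JBoss API.
          public interface GreeterMBean {
              String greet(String name);
          }

          // Standard MBean: the interface name follows the <Class>MBean convention.
          public static class Greeter implements GreeterMBean {
              public String greet(String name) { return "Hello, " + name; }
          }

          // Build a typed dynamic proxy over the mbean so callers use the plain
          // interface instead of spelling out server.invoke(...) by hand.
          public static GreeterMBean makeProxy() throws Exception {
              // A local server stands in for the remote MBeanServerConnection here.
              MBeanServer server = MBeanServerFactory.createMBeanServer();
              ObjectName name = new ObjectName("acme:service=Greeter");
              server.registerMBean(new Greeter(), name);
              return MBeanServerInvocationHandler.newProxyInstance(
                      server, name, GreeterMBean.class, false);
          }

          public static void main(String[] args) throws Exception {
              System.out.println(makeProxy().greet("world")); // prints "Hello, world"
          }
      }
      ```

      Every call on the proxy goes through the mbean server's invoke(), which is exactly where the interception/trapping hook lives.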

      The only problem here is that some of my services run on only specific hosts, so if I happen to be talking to one of the mbean servers that doesn't have the service registered, the call will fail.

      So I guess my question is, is there a HAMBeanServer type mechanism looming about? One that when you call

      server.invoke(ObjName,Method,Param[],Sig[])

      it will actually route the call to a node that has the mbean available?

      And better yet, to a node with the service deployed and STARTED (in the service mbean sense)?

      I thought about making my own HARMI mbean server thingie that did just that, but since there's no way to directly call a method on a single node in the cluster, that becomes difficult.
      I suppose I could do it by keeping a cache of remote mbean server references, one per node in the cluster.
      Then when a request comes in for service 'X', it could make a cluster call asking who has the service up and started, and forward the invocation to that node's mbean server, in effect a proxy service.
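
      Something like this, roughly. It's only a sketch of the dispatch idea: the class and the Echo mbean are invented names, local mbean servers stand in for the per-node remote connections, and a real version would check STARTED state via a cluster call rather than just isRegistered():

      ```java
      import javax.management.InstanceNotFoundException;
      import javax.management.MBeanServer;
      import javax.management.MBeanServerConnection;
      import javax.management.MBeanServerFactory;
      import javax.management.ObjectName;
      import java.util.LinkedHashMap;
      import java.util.Map;

      public class HAMBeanServerSketch {
          // Hypothetical echo service, used only to demonstrate dispatch.
          public interface EchoMBean { String echo(String s); }
          public static class Echo implements EchoMBean {
              public String echo(String s) { return s; }
          }

          // node id -> mbean server connection; in a real cluster these would be
          // remote connections, one cached per node.
          private final Map<String, MBeanServerConnection> nodes = new LinkedHashMap<>();

          public void addNode(String id, MBeanServerConnection conn) {
              nodes.put(id, conn);
          }

          // Forward invoke() to the first node that actually has the mbean
          // registered (a real version would also require it to be STARTED).
          public Object invoke(ObjectName name, String op, Object[] params, String[] sig)
                  throws Exception {
              for (MBeanServerConnection conn : nodes.values()) {
                  if (conn.isRegistered(name)) {
                      return conn.invoke(name, op, params, sig);
                  }
              }
              throw new InstanceNotFoundException(name + " not registered on any node");
          }

          public static void main(String[] args) throws Exception {
              HAMBeanServerSketch ha = new HAMBeanServerSketch();
              MBeanServer nodeA = MBeanServerFactory.createMBeanServer();
              MBeanServer nodeB = MBeanServerFactory.createMBeanServer();
              ObjectName name = new ObjectName("acme:service=Echo");
              nodeB.registerMBean(new Echo(), name); // service lives on node B only
              ha.addNode("nodeA", nodeA);
              ha.addNode("nodeB", nodeB);
              // Dispatches past node A (no mbean there) to node B.
              System.out.println(ha.invoke(name, "echo",
                      new Object[] { "hi" }, new String[] { "java.lang.String" }));
          }
      }
      ```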

      I was just wondering if this was already solved somewhere, since JBossCache seems to have solved my other big problem, one I spent quite some time on: highly transactional shared state.
      FYI, in case you didn't know, DistributedState can't really be trusted, due to the way it transfers state.
      When you stick 1GB of data (many small items) into DistributedState, it has to serialize the whole thing into a byte[] to send out for a state transfer when a node joins, roughly doubling the memory footprint. During that serialization, any modifications to DistributedState will cause concurrent modification exceptions, making the sync fail and leaving you with an unbalanced cache. And in our case we do 10-100 writes to the cache per second, so it just doesn't work.

      Ok, got a bit off track there..

      So back to my question: has this issue been solved already? Or is the solution just to bind each service to the JNDI tree via a HARMI stub?

      We've chosen to make all our services mbeans and use CMP/Hibernate for data access. Session beans seem to be the norm from reading the forums, but I'm really picky about the look and feel of APIs. I don't think clients should have to 'create' their services in order to use them. It's much clearer to tell people to look up a service and just use it than to make them jump through the hoops that session beans impose.