Yes, this will be a problem when you have a failover. Since this is only a problem when you are using mod_jk for front-end load balancing without cookies, it is currently a lower priority. But if you are willing to contribute a patch, I can take a look at it.
After I read your response, I thought about the issues (as I understood them) for quite a while.
My first instinct was to extend JVMRouteFilter to wrap the request on the way in and replace the old jvmRoute with the current servicing node's jvmRoute, much like is already done for cookies.
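To make the idea concrete, here is a minimal sketch of just the suffix-rewriting step such a wrapper would perform. The class and method names are my own invention, and it assumes the usual `<sessionId>.<jvmRoute>` format; in a real filter this logic would sit inside an HttpServletRequestWrapper overriding getRequestedSessionId() so the container never sees the stale route.

```java
// Hypothetical sketch only: the jvmRoute suffix rewrite that a request
// wrapper installed by JVMRouteFilter could apply on the way in.
public class JvmRouteRewriter {

    // Replaces the jvmRoute suffix of "<id>.<route>" with the local
    // node's route. IDs without a route suffix are returned unchanged.
    public static String replaceJvmRoute(String sessionId, String localRoute) {
        int dot = sessionId.lastIndexOf('.');
        if (dot < 0) {
            return sessionId; // no jvmRoute suffix present
        }
        return sessionId.substring(0, dot + 1) + localRoute;
    }

    public static void main(String[] args) {
        // A stale URL-encoded session ID pointing at the dead node
        // gets rewritten to the node now servicing the request.
        System.out.println(replaceJvmRoute("abc123.node1", "node2")); // abc123.node2
    }
}
```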
Unfortunately, the exceptions and corner cases that would have to be accounted for are many. For instance, this would only work if your load balancer used persistence to forward to the same Apache instance; mine does, but some may not. As well, each Apache instance would have to be configured to favour the JBoss instance marked as its local node, to make sure that the same JBoss received the request each time. This of course assumes that requests may still arrive from the client carrying stale jvmRoutes even after the migration took place, due to unrefreshed content containing old jvmRoute tags. Cookie-based migration, of course, is not susceptible to this, since the cookie gets replaced, period.
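For reference, the "favour the local node" arrangement could be approximated with mod_jk's lbfactor weighting; this is only a sketch, and the worker names, hosts, and weights below are placeholders, not a tested configuration.

```
# Hypothetical workers.properties sketch: weight the colocated JBoss
# node heavily so this Apache normally forwards to its local instance
# and only falls over to the remote one.
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2

worker.node1.type=ajp13
worker.node1.host=localhost      # local JBoss instance
worker.node1.port=8009
worker.node1.lbfactor=10         # strongly favoured

worker.node2.type=ajp13
worker.node2.host=othernode      # remote JBoss, effectively failover only
worker.node2.port=8009
worker.node2.lbfactor=1
```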
Another issue is that if the JBoss instance marked as local to an Apache node is the one that is down, then that Apache instance must also be removed from the load balancer's pool. After all, a predictable JBoss node selection is required to work around content with stale jvmRoutes.
The last issue is: what happens when the JBoss instance comes back up? If all the content properly migrated and was refreshed, then everything is fine. On the other hand, if there was non-refreshing content carrying the old jvmRoute, it would now get forwarded to the original JBoss, possibly resulting in two different JBoss instances handling different parts of the same page. Bleh.
At this point, I am wondering why we are even attempting to migrate the sessions at all. I noticed that the TreeCache strips the jvmRoutes before session IDs are inserted and replicated anyway. Is there still a local cache copy that gets used instead of the serialized version in the TreeCache, and is that why the migration is being attempted? If so, why not just let the URL-based sessions 'fall through' and take the performance hit only in the rare case where a node is taken down and the session is not in the local cache? Is this feasible?