Kal,

I think that means you have two "classes of service" and each user session belongs to one of the two classes. That's why you initially partitioned the system into two separate subsystems: each client connected to one LS Server or to the other, depending on which client application was running. Did I get it right?

When you introduced fail-over, you had to attach both data sources to both LS Server instances, with the result that each machine now handles all the data sources.

The simplest solution would be to keep both LS Servers attached to both data sources, but have clients of class "A" connect only to LS Server "A" and clients of class "B" connect only to LS Server "B". Only if one node fails should all the clients (A and B) connect to the surviving node. In other words, a physical node would have to handle all the traffic only in case of emergency.

To achieve that, configure your load balancer as in option A.1 (see Clustering.pdf) as a starting point. Then add two more VIPs, say pusha.mycompany.com and pushb.mycompany.com. The load-balancing algorithm should be configured like this: requests for pusha.mycompany.com always go to LS Server 1, unless LS Server 1 is down, in which case they fail over to LS Server 2; symmetrically, requests for pushb.mycompany.com always go to LS Server 2, unless LS Server 2 is down, in which case they fail over to LS Server 1. How to express that depends heavily on the load-balancing appliance.
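
Just to make the rule concrete, here is a minimal sketch in Python of the routing decision the balancer should apply on each request (the server names and the health table are made-up placeholders; your appliance will express the same thing in its own configuration language, typically as a primary/backup pool per VIP):

    # Map each VIP to its (primary, backup) LS Server instance.
    # The hostnames are hypothetical placeholders.
    ROUTING = {
        "pusha.mycompany.com": ("ls-server-1", "ls-server-2"),
        "pushb.mycompany.com": ("ls-server-2", "ls-server-1"),
    }

    # Hypothetical health table; a real appliance keeps this up to date
    # through periodic health probes against each node.
    HEALTHY = {"ls-server-1": True, "ls-server-2": True}

    def pick_target(vip: str) -> str:
        """Route to the VIP's primary node; fail over to the backup."""
        primary, backup = ROUTING[vip]
        return primary if HEALTHY[primary] else backup

    if __name__ == "__main__":
        print(pick_target("pusha.mycompany.com"))  # ls-server-1
        HEALTHY["ls-server-1"] = False             # simulate a node failure
        print(pick_target("pusha.mycompany.com"))  # ls-server-2 (fail-over)

Note that while both nodes are up, each one serves only its own class of clients, which is exactly the traffic segregation described above.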

Hope that helps.