-
Load balancing
We are planning to have 2 LS servers (hardware) load balanced and there will be 2 data feeders connected to these servers through ARI.
Although the client sessions will be load balanced across the LS Servers, we do not see how the data feeders can be, because at some point both feeders will be serving all the requests coming from the clients. Having 2 feeders helps in the fail-over scenario, but we see an issue in scaling.
Are there any suggestions from Lightstreamer?
Regards,
Kal
-
Hi Kal,
If I understood correctly, you have the following architecture:
Code:
Clients <---> |   LOAD   | <--> LS Server 1 <---> Feeder 1
              |          |
Clients <---> | BALANCER | <--> LS Server 2 <---> Feeder 2
The two feeders deliver the same data, based on the subscriptions each node is handling. If Feeder 1 fails, LS Server 1 fails too and the current traffic is routed to LS Server 2.
I would expect that, after the system has been running for some time, the global set of subscribed items is the same on LS Server 1 and LS Server 2. That means that if all the traffic is routed to one node, that node's Feeder will not see an increased load.
Is that what you meant?
Cheers,
Alessandro
-
Hi Alessandro,
You got our architecture setup correctly.
I am sure I didn't explain our data feeder setup properly. We initially planned to have each data feeder serve only a specific data source. As each data feeder is tied to one LS Server, we did not know how to direct client requests to a specific LS Server through the load balancer.
But looking at the fail-over situation, it makes sense to have all the feeders aware of all the data sources. The question is: as we add more data sources and views, we will keep increasing the load on the data feeders and the LS kernel. Is there a way to avoid that?
Thanks,
Kal
-
Kal,
I think that means you have two "classes of service" and each user session connects to one of the two classes. That's why you initially partitioned the system into two separate subsystems: each client would connect to one LS Server or to the other based on which client application is running. Did I get that right?
To introduce fail-over, you had to mount both data sources on both LS Server instances, resulting in each machine handling all the data sources.
The simplest way to mitigate that would be to keep both LS Servers attached to both data sources, but have clients of class "A" connect only to LS Server "A" and clients of class "B" connect only to LS Server "B". Only if one node fails would all the clients (A and B) connect to the other node. In other words, a physical node would have to handle all the traffic only in an emergency.
To achieve that, you should configure your load balancer as in option A.1 (see Clustering.pdf) as a starting point. Then add two more VIPs, say pusha.mycompany.com and pushb.mycompany.com. The load balancing algorithm should be configured like this: requests for pusha.mycompany.com always go to LS Server 1, unless LS Server 1 is down; requests for pushb.mycompany.com always go to LS Server 2, unless LS Server 2 is down. How to do that depends heavily on the load balancing appliance.
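Just as an illustration of the idea (not specific to any appliance): with an HAProxy-style software balancer, "always go to LS Server 1, unless it is down" maps naturally to a primary server plus a backup server per VIP. All hostnames, IPs and ports below are made up for the example:
Code:
# Hypothetical HAProxy sketch: one frontend per VIP, each backend
# has a primary LS Server and the other node marked as backup.

frontend pusha
    bind pusha.mycompany.com:80
    default_backend ls_a

backend ls_a
    # LS Server 1 is primary; LS Server 2 is used only if ls1 fails checks
    server ls1 10.0.0.1:8080 check
    server ls2 10.0.0.2:8080 check backup

frontend pushb
    bind pushb.mycompany.com:80
    default_backend ls_b

backend ls_b
    # Mirror image: LS Server 2 primary, LS Server 1 as backup
    server ls2 10.0.0.2:8080 check
    server ls1 10.0.0.1:8080 check backup
With this shape, under normal operation each class of clients loads only its own node and feeder, and the full load lands on a single node only during an outage.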
Hope that helps.
-
Thanks Alessandro!
That is what we are thinking too, i.e. adding more clusters for different data stores and directing clients to different clusters depending on the user type.
-Kal