OK, so I have modified the client simulator in your LLTT to use our own .NET adapter on the LS server (communicating with LS over TCP sockets).

In this setup I was able to run 1000 clients, each receiving 50 private messages/second (i.e. a throughput of 50,000 updates/second) on my local developer machine, with both the client and server running on that box.

Based on this test, I concluded that the LS server (and .NET adapter) could easily handle the load and publish the messages to the clients.

I then moved to the test environment (one machine set up as the server and another machine running as the client). The machines are on the same Windows domain with very low latency.

Here the test behaved quite differently. With 10 clients the numbers were fine, but with 100 clients something interesting happened.

The LS monitor reported a throughput of approximately 5000 updates/second (each update is 100 bytes), but the throughput in kbit/s was only 1700 kbit/s, where it should really be about 5000 * 100 * 8 / 1024 = 3900 kbit/s.
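For reference, this is the sanity check behind that figure (all numbers are the monitor readings quoted above; nothing here is LS-specific):

```python
# Compare the bandwidth the LS monitor *should* report with what it actually shows.
updates_per_sec = 5000   # updates/second reported by the LS monitor
payload_bytes = 100      # size of each update in bytes
reported_kbit = 1700     # kbit/s actually shown by the monitor

expected_kbit = updates_per_sec * payload_bytes * 8 / 1024
print(f"expected: {expected_kbit:.0f} kbit/s")                    # expected: 3906 kbit/s
print(f"reported fraction: {reported_kbit / expected_kbit:.2f}")  # reported fraction: 0.44
```

So the wire traffic is only about 44% of what 5000 full 100-byte updates would require.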

I don't know how to explain this. The LS configuration is the same on my local machine and on the server, yet for some reason LS believes it is sending the required number of updates while the actual payload is significantly smaller (less than half).

This leaves me thinking that something is being filtered out somewhere before the content is delivered to the client (both the server and the client machine seem healthy at this point, based on performance counters).