  1. #1

    Question Issues with simulating many clients from the same machine


    I am running tests that simulate a large number of client connections from one machine to a Lightstreamer server on another machine (300 connections per client machine, up to a maximum of 3000 connections).

    Using the .NET client API, it looks like each connection made from the client machine to the server uses around 4 threads. Running 300 clients on the same machine will therefore take up around 1200 threads, all of them context-switching and competing for processor cycles.
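    The arithmetic behind those numbers can be sketched as follows (the 4-threads-per-connection figure is my own observation, not a documented constant):

```java
public class ThreadLoadEstimate {
    // Rough thread count when simulating many clients in one process.
    // threadsPerConnection is an observed figure, not a documented one.
    static int totalThreads(int clients, int threadsPerConnection) {
        return clients * threadsPerConnection;
    }

    public static void main(String[] args) {
        // 300 simulated clients at ~4 threads each
        System.out.println(totalThreads(300, 4) + " threads competing for CPU");
    }
}
```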

    Furthermore, in this setup (with private messages being delivered to each client at 10 to 50 times/second), I get out-of-memory exceptions in the client log, even with 300 clients at 10 updates/second (3000 messages/second from the server to the client machine).

    Memory consumption on the client machine seems to be OK and CPU utilization is fairly low, BUT we have spikes of 100% CPU usage, and when this happens the CPU usage stays there - and the clients do not seem to get any more messages from the server.

    Any comments or ideas why we see this?

  2. #2
    Join Date
    Feb 2012
    The .NET client library was not designed to work under these conditions and has no optimizations aimed at running such a number of concurrent clients on the same machine.
    I think the CPU and memory problems are due to the many instances competing on the same machine rather than to the absolute number of messages received per second.

    For the specific purpose of load/stress tests we developed an internal Load Test Toolkit (LLTT), which is available to developers on demand. It addresses the problem of client-side scalability with a client based on a Lightstreamer Java library that includes specific optimizations.

    The LLTT is made up of an Adapter Simulator and a Client Simulator. This makes it quite useful in the preliminary phases of a project, when no Adapters or Clients have been developed yet but it is necessary to do capacity planning of the system under different load scenarios.

    If you want to use your own Adapters, since the LLTT is provided with full source code, you might consider modifying the Java client of the toolkit to suit your server configuration.

    Please let us know if you are interested in the LLTT.

  3. #3
    Thanks for the info.

    I already started using the LLTT framework this morning - and saw a pretty big difference, especially on the client machine (1000 connections, each with 1 subscription changing every 20 ms, seemed to be possible). Machine stats with regard to thread count looked much better.

    I had been looking for the source - but failed to spot the src folder! I found it now - and that helps a lot.

    The thing is that we are testing several streaming frameworks and built a framework for doing that (entirely in .NET), but I think we might have to implement the client part in Java to make sure it works. I had assumed that the LSClient implementation in .NET was a direct port of the Java version, but that is apparently not the case.

    I will post findings here in case we choose to go down the path of implementing the client in Java.

  4. #4
    Join Date
    Feb 2012
    I confirm that the Lightstreamer .NET client library was born as a port of the Java client library, with the addition of a few specific implementations.

    Beyond this, the test Java client exploits a couple of tricks specific to the library version included in the LLTT.

  5. #5
    Ok, so I have tried to modify the client simulator in your LLTT to use our own .NET adapter on the LS server (communicating with LS through TCP sockets).

    In this setup I was able to run 1000 clients, each receiving 50 private messages/second (i.e. a throughput of 50,000 updates/second), on my local developer machine with both the client and server running on that box.

    Based on this test, I concluded that the LS server (and .NET adapter) could easily handle the load and publish the messages to the client.

    I then moved to the test environment (a single machine set up as server and another machine running as client). The machines are on the same Windows domain with very low latency.

    In this case the test was quite different. Running with 10 clients the numbers were fine, but running with 100 clients something interesting happened.

    The LS monitor reported a throughput of approximately 5000 updates/second (each update is 100 bytes), but the throughput in kbit/s was only 1700 kbit/s, where it should really be 5000 * 100 * 8 / 1024 ≈ 3900 kbit/s.
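    The expected wire rate can be checked with a few lines (using 1 kbit = 1024 bits, as in the arithmetic above):

```java
public class ThroughputCheck {
    // Expected wire rate in kbit/s for a given update rate and payload size.
    static double expectedKbits(int updatesPerSec, int bytesPerUpdate) {
        return updatesPerSec * bytesPerUpdate * 8 / 1024.0;
    }

    public static void main(String[] args) {
        double expected = expectedKbits(5000, 100); // ~3906 kbit/s
        double observed = 1700.0;                   // figure reported by the LS monitor
        System.out.printf("expected %.0f kbit/s, observed %.0f kbit/s (%.0f%%)%n",
                expected, observed, 100 * observed / expected);
    }
}
```

    The observed rate is well under half of the expected one, which is what points at the payload being reduced somewhere on the way.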

    I don't know how to explain this - the LS configuration is the same on my local machine and the server - but for some reason LS believes it is indeed sending the required number of updates, yet the payload is significantly smaller (less than half).

    This leaves me thinking that something is being filtered out somewhere before the content is delivered to the client (both the server and the client machine seem to be healthy at this point, based on performance counters).

  6. #6
    Join Date
    Feb 2012
    I have a couple of ideas on the reason for the discrepancy between the reported throughput and what you expected.

    The first concerns the intervention of the "delta delivery" algorithm in the case where two consecutive updates have fields with the same value. Could this scenario apply to your .NET adapters?
    The other is the possibility that <max_delay_millis> is set higher than the interval at which you push updates.
    Neither, however, would explain the difference in behavior between the 10-client case and the 100-client case.

    Can you please confirm the value of the <max_delay_millis> parameter in use? And of <delta_delivery>?
    If you prefer, send us the entire configuration file ("lightstreamer_conf.xml") used by your test server, along with the server log of your tests with the "LightstreamerMonitorText" logger set to TRACE level. You can also mail them to us at the support address.

    Thank you.
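    For readers unfamiliar with delta delivery: the idea is that fields whose value is unchanged from the previous update are not re-sent on the wire. A toy illustration of the principle (not the actual Lightstreamer implementation):

```java
import java.util.HashMap;
import java.util.Map;

public class DeltaDeliverySketch {
    private final Map<String, String> lastSent = new HashMap<>();

    // Returns only the fields whose value changed since the previous update;
    // unchanged fields are filtered out and never reach the wire.
    Map<String, String> filter(Map<String, String> update) {
        Map<String, String> delta = new HashMap<>();
        for (Map.Entry<String, String> e : update.entrySet()) {
            if (!e.getValue().equals(lastSent.get(e.getKey()))) {
                delta.put(e.getKey(), e.getValue());
                lastSent.put(e.getKey(), e.getValue());
            }
        }
        return delta;
    }

    public static void main(String[] args) {
        DeltaDeliverySketch d = new DeltaDeliverySketch();
        // First update: both fields are new, both are sent.
        System.out.println(d.filter(Map.of("price", "100", "qty", "5")).size()); // 2
        // Second update: "price" repeats, so only "qty" is sent.
        System.out.println(d.filter(Map.of("price", "100", "qty", "7")).size()); // 1
    }
}
```

    The update *count* stays the same while the bytes on the wire shrink, which matches the symptom of a full updates/second figure with a reduced kbit/s figure.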

  7. #7
    Ok, so I ran another test with the same setup: one machine as server, another as client (with 1000 clients, each having 1 subscription to a private item that updates 50 times a second). The messages sent over the wire are 100 characters, e.g. "AAAAAA....", "BBBBB....", "CCCCC...." etc. up to "ZZZZ....." and then rolling over to "AAAA...." again.
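    A minimal sketch of the payload scheme described above (the exact generator in our test framework may differ):

```java
public class PayloadGenerator {
    // Builds the i-th 100-character test message: "AAA...", "BBB...", up to
    // "ZZZ...", then rolls over to "AAA..." again.
    static String message(int i) {
        char c = (char) ('A' + (i % 26));
        StringBuilder sb = new StringBuilder(100);
        for (int k = 0; k < 100; k++) sb.append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(PayloadGenerator.message(0).charAt(0));  // 'A'
        System.out.println(PayloadGenerator.message(25).charAt(0)); // 'Z'
        System.out.println(PayloadGenerator.message(26).charAt(0)); // rolls over to 'A'
    }
}
```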

    Now, running this test in our test environment (domain x) produces the result I wrote about earlier: 50,000 updates/second, but a much too low kbit/s rate.

    However, running the exact same test on domain y produces correct results (50,000 updates/s, approx 4500 kbit/s).

    The first test runs on Windows Server 2012 (LS server) and Windows Server 2008 R2 (clients), while the test with the good results runs on two Windows 8 machines.

    But to answer your questions:

    Delta-delivery is turned off. And even with the setting on, two consecutive messages should not be identical.

    Both servers' configuration files are the same, with:


    I will send config file + log file + screenshot of monitor console to the support email address.
    Last edited by cwt237; September 30th, 2013 at 09:33 AM.

  8. #8
    Join Date
    Feb 2012
    We have checked the log files sent to our support email address.

    The discrepancies in the expected figures seem to be due to the intervention of delta delivery. Something similar to what happened here.

  9. #9
    As a workaround I can create unique messages per client per push. This will make the updates/second correlate nicely with the kbit/second number.
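    One way to sketch that workaround, assuming a hypothetical per-client sequence counter (names are illustrative, not from our actual framework): prefix each push with the client id and a sequence number so no two consecutive updates are ever byte-identical, which keeps delta delivery from suppressing any part of the payload.

```java
public class UniquePayload {
    // Prefixes the base payload with "clientId:seq:" so consecutive updates
    // always differ, then truncates back to 100 chars so the throughput
    // arithmetic (100 bytes/update) stays valid.
    static String unique(String base, String clientId, long seq) {
        String prefix = clientId + ":" + seq + ":";
        return (prefix + base).substring(0, 100);
    }

    public static void main(String[] args) {
        String base = "A".repeat(100);
        String m1 = unique(base, "client42", 1);
        String m2 = unique(base, "client42", 2);
        System.out.println(m1.length());   // 100
        System.out.println(m1.equals(m2)); // false: consecutive updates differ
    }
}
```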



