  1. #1 Member (Kourites, joined May 2012, 6 posts)

    Websockets message throughput limit?

    Hi,

    I would like to ask whether the Lightstreamer Vivace server has any message throughput limit by default and, if so, where I can increase it. I am testing with a drawing-pad example using an HTML5 canvas. This means a large number of messages per second (the x,y cursor positions) are sent to the server and pushed back to the listening client (only one client in this case). The problem is that a good number of these messages are ignored and never pushed to the listener, even though I am using a RAW subscription.

    I suspect this happens not only with Colosseo's new WebSockets but also with polling and streaming.

    I replicated the application using custom Ajax polling every 33 ms and got a far better result in terms of how many x,y coordinates were received. Any suggestions as to why this is happening?

    Thanks

  2. #2 Administrator (Milan, joined Jul 2006, 1,090 posts)
    The Vivace Server doesn't enforce any limit on the update frequency for any single item in RAW mode, so only resource limits could come into play.
    However, in the factory configuration the Server is tuned for efficiency rather than for high frequency, because of the <max_delay_millis> setting of 200 ms.
    With this setting, even in RAW mode, in which no filtering is allowed, the Server sends data to the client only once every 200 ms, batching all events accumulated over that interval.

    I suppose that this behavior makes you assume that the Server actually filters the updates, which should not happen in RAW mode.
    Please try setting <max_delay_millis> to 0 and see what happens.
    Anyway, if you have any clear evidence that updates get filtered, we will have to investigate further.
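
    For reference, a minimal sketch of the change suggested above, assuming the element sits where the factory lightstreamer_conf.xml places it (only the value changes):

    ```xml
    <!-- In lightstreamer_conf.xml: 0 disables the 200 ms batching pause,
         so events are forwarded as soon as they arrive, trading some
         efficiency for lower latency -->
    <max_delay_millis>0</max_delay_millis>
    ```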

  3. #3 Member (Kourites)
    Thanks for your tips; <max_delay_millis> did in fact work, and I also set <max_buffer_size> to 0 to achieve what I want.

    You are right: the Lightstreamer server seems to be configured to scale with the number of connections, which is very good, but what I am trying to do here is scale with the number of messages per second. It now works very well with WebSockets, and I could reach up to 800 messages/second.

    I am trying to do the same with polling as well. I know it is probably not possible (or not healthy) to achieve that many polling requests per second, but I want to find the limit. Any ideas or tips on server configuration to achieve this with polling? Thanks

  4. #4 Administrator (Milan)
    Good. In fact, in our internal tests with the JavaScript Client Library, a test page running locally and just reading the updates (with no DOM operations) could reach many thousands of updates per second on Chrome.

    By the way, if I understand correctly, in your tests you wanted to enforce filtering on events that couldn't be forwarded immediately, which is why you set <max_buffer_size> to 0.
    Actually, this is not the canonical way to do it, because that setting affects all items.
    Alternatively, you could use MERGE or DISTINCT mode for your item instead of RAW mode and set the buffer size to 0 for that specific item (for an item in MERGE mode, this is the default).

    In fact, in a test like yours, setting the buffer size to 0 prevents updates from being enqueued in the Server while it waits for the client and/or network to handle the updates already sent.
    Note that queueing can also occur at the TCP buffer level; Lightstreamer does its best to reduce enqueueing in the TCP buffers, but it does not have full control, particularly on the browser side.
    With polling, this internal queueing is eliminated.

    Are you interested in polling in order to cope with cases in which the infrastructure doesn't allow streaming, or in order to reduce the internal queueing described above?
    In the latter case, polling over WebSockets would provide the best performance;
    in the former case, you should use polling over HTTP, as WebSockets would be unavailable in such a scenario.
    You can use setForcedTransport for this setting.

    The default client and Server configuration for polling is already adequate, as it does not introduce any pause (apart from <max_delay_millis>).
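
    The two client-side settings discussed above can be sketched as follows, assuming the Lightstreamer JavaScript Client 6.x API (LightstreamerClient, Subscription); the server address and adapter names in the usage comment are placeholders. The logic is factored into a function so it can be exercised without the real library.

    ```javascript
    // Sketch: force the polling transport and cap the per-item buffer,
    // so stale updates are conflated instead of piling up in the Server.
    function configureForPolling(client, subscription) {
      // Explicitly force the transport; "WS-POLLING" would poll over
      // WebSocket, "HTTP-POLLING" is the fallback when WebSockets are blocked
      client.connectionOptions.setForcedTransport("HTTP-POLLING");
      // Keep at most one queued update per item
      subscription.setRequestedBufferSize(1);
    }

    // With the real library, usage would look roughly like:
    //   var client = new LightstreamerClient("http://myserver:8080", "MYADAPTERSET");
    //   var sub = new Subscription("MERGE", ["cursor"], ["x", "y"]);
    //   configureForPolling(client, sub);
    //   client.subscribe(sub);
    //   client.connect();
    ```
    
    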

  5. #5 Member (Kourites)
    Right now I am able to push, for example, 800 msgs/s to a client using WS-STREAMING, which is OK.

    What I want to test now is: if the client does not support WebSocket and hence falls back to HTTP-POLLING, what will the network overhead of all the polling requests be? I assume there will be more traffic and higher latency.

    I also want to know whether it is possible to achieve 800 msgs/s using HTTP-POLLING, because with the (wrong) server settings I have right now I am receiving only some 40 messages every 20 seconds.

    I hope my problem is clear.

    Thanks for your reply

  6. #6 Power Member (Cesano Maderno, Italy, joined Jul 2006, 784 posts)
    Hi,

    40 msgs per 20 seconds is quite low; with factory settings I can get 4 polls per second using our Formula 1 demo (I didn't try to make the adapter push more data, so I can't exclude that it can do better).
    If you want to try it yourself, open a console on the F1 demo and run this code:
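
    (The snippet originally posted here was lost when the thread was archived. The following is a hypothetical reconstruction of a console counter serving the same purpose; the listener shape follows the Lightstreamer subscription-listener interface, but `sampleRate` and all variable names are illustrative, not the original code.)

    ```javascript
    // Count updates delivered to a subscription listener so the
    // poll/update rate can be read from the console
    var updateCount = 0;
    var lastSample = Date.now();

    var listener = {
      // Lightstreamer subscription listeners receive each update here
      onItemUpdate: function (update) {
        updateCount++;
      }
    };

    // Returns updates/second since the previous call, then resets the counter
    function sampleRate() {
      var now = Date.now();
      var elapsedSec = (now - lastSample) / 1000;
      var rate = elapsedSec > 0 ? updateCount / elapsedSec : updateCount;
      updateCount = 0;
      lastSample = now;
      return rate;
    }

    // In the F1 demo console you would attach the listener to the page's
    // subscription object (name assumed) and log the rate once per second:
    //   someSubscription.addListener(listener);
    //   setInterval(function () { console.log(sampleRate().toFixed(1) + " upd/s"); }, 1000);
    ```
    
    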


    So I suppose something is imposing a limit on the poll frequency.

    Did you set anything with the setPollingMillis setting?

    Do you have anything configured in the <min_interpoll_millis> element of the server configuration file?

    Please check a couple of poll requests from the middle of a session; can you paste the POST parameters here?

  7. #7 Member (Kourites)
    OK, so here are some tests I carried out with better evidence of what is going on. Excuse my previous post; I am still experimenting with settings, etc.

    These are my server settings for the streaming section:

    <max_delay_millis>33</max_delay_millis>
    <default_keepalive_millis>5000</default_keepalive_millis>
    <min_keepalive_millis>1000</min_keepalive_millis>
    <max_keepalive_millis>30000</max_keepalive_millis>

    and these are my settings for the Smart Polling section:

    <max_polling_millis>0</max_polling_millis>
    <max_idle_millis>30000</max_idle_millis>
    <min_interpoll_millis>0</min_interpoll_millis>

    I first listened to the data adapter over a WS-STREAMING connection, which I set to send a message every 33 ms. Then I used Wireshark to check what the server is sending to the client, using this filter:

    ip.src == { SERVER IP } and ip.addr == { CLIENT IP} and tcp.port == 8080 and frame.time >= "May 28, 2012 19:20:00" and frame.time <= "May 28, 2012 19:21:00"

    That means I filtered for all messages sent on port 8080 in a one-minute time range. I was expecting about 1800 packets sent from server to client, since I am sending at a 33 ms interval: roughly 30.3 msgs/s * 60 seconds ≈ 1800. Instead I got 1700 packets, i.e. 28.3 msgs/second. That is pretty close to what I was expecting, and it is actually very good performance.

    I ran the exact same test again using a forced HTTP-POLLING connection instead. The result was 742 replies in one minute, hence 12.36 messages per second.

    Am I right in assuming that the polling request limit is 12.36 messages per second in my environment? I'm pretty sure WebSockets could keep going higher.
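
    (The arithmetic above can be checked in a couple of lines; the figures are taken directly from the capture described in this post.)

    ```javascript
    // Convert a packet count over a capture window into a rate
    function ratePerSecond(packets, seconds) {
      return packets / seconds;
    }

    var expected = ratePerSecond(60 * 1000 / 33, 60); // ~30.3 msgs/s at a 33 ms send interval
    var streaming = ratePerSecond(1700, 60);          // ~28.3 msgs/s measured over WS-STREAMING
    var polling = ratePerSecond(742, 60);             // ~12.4 msgs/s measured over HTTP-POLLING
    ```
    
    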

  8. #8 Administrator (Milan)
    In a local test on my PC, I could reach higher rates in HTTP-POLLING.
    Are network roundtrip times involved in your test?
    Or any elaboration time on the client side?
    Otherwise, we think that you should have room for improvement.

    Anyway, I must correct myself on one point.
    A couple of posts above, I wrote that in the default configuration there are no pauses introduced by the Client or Server apart from <max_delay_millis>.
    Actually, the JavaScript Client Library also has a similar pause, likewise aimed at improving efficiency; this pause is 50 ms, so at the current stage you are facing an upper limit of 1000 / 50 = 20 polls per second.

    However, we acknowledge that your use case is becoming increasingly important, so we are planning to remove the limitation in the next update of the Colosseo preview release.
    In the meantime, you can obtain a limit-free version of the library by applying the following change to the current version:
    - open lightstreamer.js in a text editor
    - check from the heading that it is Version 6.0 b2 Build 1525
    - find the following string: if(this.Uv){var j=n.uC()-this.Uv;if(d>j)d-=j;else d=1}
    - replace d=1 with d=0 at the end of the string

  9. #9 Member (Kourites)
    I performed some new tests and completely eliminated DOM modifications (the HTML5 canvas use), which I realized were slowing the whole polling process down. Surprisingly, I got an almost constant 20 messages per second! Then I read Mr Crivelli's post and understood why it was stopping at 20.

    I also performed a test that basically involved a JavaScript loop making requests at different timescales. I could scale up to 50 requests per second while still getting server responses without delays (on a local network). If I added more requests per second, I started to get delays, and the replies from the server started arriving mixed up and out of order.

    I am sure Lightstreamer can scale this far with polling as well, and I will try the JavaScript library modification you suggested to test this.

    Thanks a lot, your replies were very helpful.

  10. #10 Member (Kourites)
    Yes, I can confirm that with your JavaScript library modification I could scale up to 30 messages/second with polling instead of 20! I did not test for more because I don't need more than 30.

    Thanks

 

 
