  1. #11
    Administrator
    Join Date
    Jul 2006
    Location
    Milan
    Posts
    1,079
    Quote Originally Posted by ManKeer View Post
    BTW, what will happen if I configure both clients for "unfiltered" subscriptions? How will this affect performance?
    Preventing filtering can have an effect when the overall update flow to a client is so large (or the client/network so slow) that the client cannot process all the available updates.
    In that case, some updates have to be filtered out, which requires that any subscription with a huge flow is not "unfiltered".
    Other items with a low flow can still be subscribed to as "unfiltered".
    As long as the update flow is manageable, there is no significant performance difference between "unlimited" and "unfiltered".
    Only in rare cases of race conditions (such as two updates in quick succession) does "unfiltered" prevent the possible filtering out of an update.

    Another case in which "unfiltered" makes a difference is when licensing restrictions put an a-priori limit on the update frequency of an item.
    But, in this case, this only works if the average update frequency of the item is lower than the license limit.

    Note that, for MERGE mode, you can reduce filtering in almost the same way by enlarging the buffer, which by default is 1, through setRequestedBufferSize.
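    As a rough illustration of the trade-off (a toy Python model, not the Lightstreamer API): with the default MERGE buffer of 1, a burst that outpaces the client is conflated down to the latest value, while a larger buffer set via setRequestedBufferSize lets more intermediate updates survive.

    ```python
    from collections import deque

    def deliver_burst(updates, buffer_size):
        """Toy model of a burst arriving faster than the client can drain:
        the buffer holds at most buffer_size pending updates and, when full,
        the oldest pending update is dropped (conflated away)."""
        buf = deque(maxlen=buffer_size)  # deque with maxlen drops the oldest item
        for u in updates:
            buf.append(u)
        return list(buf)  # what the client eventually receives

    burst = [1, 2, 3, 4, 5]           # five updates in quick succession
    print(deliver_burst(burst, 1))    # buffer of 1 (default) -> [5]
    print(deliver_burst(burst, 3))    # enlarged buffer -> [3, 4, 5]
    ```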

    That said, I'm still not sure which of the above applies to your scenario.

  2. #12
    Senior Member
    Join Date
    Dec 2019
    Posts
    55
    Hi Dario,

    I have attached the logs that I took the snippet from.

    You will receive an email from m.ankeer@gmail.com.

    You can check all the sessions.

    Regards

  3. #13
    Administrator
    Join Date
    Jul 2006
    Location
    Milan
    Posts
    1,079
    The log shows that there is an underlying frequency limit of 3 updates per second due to the license in use.
    This accounts for the suppression of some updates when two or more are produced in a short interval.
    As discussed above, you can request the item as unfiltered, but you must be sure that the average frequency of updates of this item stays steadily below 3 per second.
    Otherwise, you can use setRequestedBufferSize and determine a size that is a trade-off between possible lost (i.e., filtered-out) updates and possible update delays.

    This does not explain why an item in COMMAND mode yields fewer updates than an equivalent item in MERGE mode.
    We can investigate this case, but please provide some precise references.
    If you saw this happening in the run of the provided log, please specify two item names for the COMMAND and MERGE subscriptions with different behaviors.

  4. #14
    Senior Member
    Join Date
    Dec 2019
    Posts
    55
    Thanks Dario,

    But I need answers to the following questions:

    1 - Is there any way to increase the max update frequency per item per second to more than 3?
    2 - What is the max message size that can be sent?
    3 - For bandwidth (BW) purposes, which is better: sending small messages to separate items [# items = 400], or combining all messages into one big message sent to a common item [one common item for the 400 items]?

    4 - For BW calculation purposes: each client has one connection (one session) and subscribes to 100 items; each item has 3 updates per second, as you mentioned. If each message size is 50 bytes, then the required BW is:

    100 * 3 * 50 * 8 / 1000 = 120 kbit/s. Am I right?

    If we combine those 100 items into 1 item, the max message size is 5000 bytes, and the new BW is:
    1 * 3 * 5000 * 8 / 1000 = 120 kbit/s, the same as above?

    So if the above is true, what is the effect of a large message size?

    Thanks for your kind help and usual support.

  5. #15
    Administrator
    Join Date
    Jul 2006
    Location
    Milan
    Posts
    1,079
    In a general scenario, separate items perform better, because it may happen that only some of the 100 items get an update while others don't.
    In this case, only the updates for items that really change are sent.
    On the other hand, for a single big item, a single combined update is sent every time any of the 100 values included is different.
    You could reduce this problem by using fields, so that you can use a single item with 100 different fields.
    In this case, in the combined update, the values that don't change are not included.
    But the set of fields is fixed, so this assumes that the needed set doesn't change over time.

    So your count is correct only if all 100 items change values every time, and some encoding overhead still has to be added.
    With different items the overhead is quite small (about a dozen bytes per update, to be added to the 50-byte figure).
    With a single item, the Lightstreamer overhead is small, but the overhead needed to pack the 100 values into a single one also has to be considered.
    More overhead is due to the generation of WebSocket messages and TCP packets.
    With many updates, Lightstreamer can pack multiple updates into a single message, but it tries to forge small messages; hence, with 100 items it may use more messages and add some overhead, whereas with one big item it is forced to use a single message.
    However, small messages give some benefits, for instance when the communication channel is narrow.
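    A quick sanity check of the arithmetic from question 4 (the ~12-byte per-update overhead used below is the approximate figure mentioned above, not an exact constant):

    ```python
    def kbits_per_sec(items, updates_per_sec, payload_bytes, overhead_bytes=0):
        """Bandwidth for one session: items x rate x size, converted to kbit/s."""
        return items * updates_per_sec * (payload_bytes + overhead_bytes) * 8 / 1000

    print(kbits_per_sec(100, 3, 50))   # 100 separate items, 50-byte payloads -> 120.0
    print(kbits_per_sec(1, 3, 5000))   # one combined 5000-byte item -> 120.0
    # With roughly a dozen bytes of per-update framing overhead (approximate
    # figure) added to each of the 100 small updates:
    print(kbits_per_sec(100, 3, 50, overhead_bytes=12))  # -> 148.8
    ```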

    To sum up, assuming that the bandwidth is the same (i.e., all values change every time), a single large message should not have a negative impact.
    The differences are at the implementation level (like memory fragmentation) and should not be significant in normal situations.
    In fact, Lightstreamer doesn't enforce a maximum message size.
    Obviously, really big sizes may incur resource issues, but a safe size is difficult to estimate.

    About update frequency, you can experiment with unlimited frequency by configuring a DEMO license
    (you should set DEMO in <license_type> and <contract_id> in lightstreamer_edition_conf.xml).
    This limits the number of sessions to 20.
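    For reference, the relevant fragment would look like the following (element names as stated above; the surrounding structure of lightstreamer_edition_conf.xml may differ depending on the Server version):

    ```xml
    <license_type>DEMO</license_type>
    <contract_id>DEMO</contract_id>
    ```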
    If you opt for upgrading your license, you can contact info@lightstreamer.com.

  6. #16
    Senior Member
    Join Date
    Dec 2019
    Posts
    55
    Now, my data source pushes messages based on the Lightstreamer rate.

    However, my Java client receives messages while the JavaScript client doesn't, and sometimes vice versa.

    How can this happen? Based on my understanding, if a message is received by one client, it should be received by the other, since both are subscribed to the same item.

    Would you please explain?

    I have sent you the logs from my business email m.ankeer@aljaziracapital
    Last edited by ManKeer; August 27th, 2023 at 02:42 PM.

  7. #17
    Administrator
    Join Date
    Jul 2006
    Location
    Milan
    Posts
    1,079
    For the general question posed, I can just confirm that, when filtering is possible, it is done on a client-by-client basis.
    This, in case of particular race conditions, accounts for identical subscriptions receiving data in different ways.

    Now I see that the case shown is not the typical race condition that can happen due to the underlying frequency limit.
    In this specific case, the Server had enough time available to send the event, but it didn't.

    My only explanation is that the connection could not work at full speed.
    If previous writes on the connection could not be flushed quickly, then the subsequent write might have been blocked, and the next event could have been held back long enough to be superseded by the following one.
    I see that the affected session was reopened a few minutes earlier, after a failed attempt to recover from a previous session, which must have had connectivity issues.
    So, perhaps, the communication is disturbed in some way.

    As an experiment, you could try enlarging the send buffer used by the Server, which is small by default because, in normal usage scenarios, filtering is encouraged.
    For instance, you can set
    <sendbuf>5000</sendbuf>
    and see if there is any improvement.

    Unfortunately, with COMMAND mode, you cannot leverage the client's setRequestedBufferSize setting, as the buffer in COMMAND mode always holds a single update per key.

    BTW, can you confirm that the Metadata Adapter doesn't introduce any bandwidth limit?
    This is possible if your adapter configuration was inspired by our demo examples.
    You may send us adapters.xml for a check.

  8. #18
    Senior Member
    Join Date
    Dec 2019
    Posts
    55
    Dario,

    I want to know if string compression can affect the BW, i.e., if we compress the sent message, do we decrease the required BW, or does it not affect the BW at all?

  9. #19
    Administrator
    Join Date
    Jul 2006
    Location
    Milan
    Posts
    1,079
    If I understand correctly, you would like to try this in order to have the update packets flow faster and prevent race conditions that would lead to conflation.
    BTW, did you try the experiment with the <sendbuf> setting?

    I confirm that compressing the values may have an effect; in fact, your data, according to your logs, seems quite redundant.
    But perhaps, instead of compressing it, you could remove the redundancy at the origin.
    In fact, I see many \\u003d sequences in your update values. This is the Unicode escape for the = symbol.
    It is not clear to me how they are quoted, but perhaps they can be quoted in a shorter way.
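    As a quick illustration (a Python sketch; how the escaping is actually introduced depends on your serialization chain): each \\u003d costs six bytes on the wire where a plain = would cost one.

    ```python
    raw = 'key\\u003dvalue'                # the value as seen in the logs
    decoded = raw.encode('ascii').decode('unicode_escape')
    print(decoded)                         # key=value
    print(len(raw), '->', len(decoded))    # 14 -> 9 bytes
    ```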

    Lightstreamer itself uses several techniques to reduce bandwidth needs.
    If you leverage different fields to separate the various parts of your updates, then you can take advantage of the delta delivery feature, which, in many cases, reduces the size of the updates.
    On the other hand, if you use a JSON representation and cannot leverage fields, you can take advantage of the support for sending deltas in the JSON Patch format.
    For big fields of generic type, there is another kind of reduced delta, based on the "diff-match-patch" algorithm.
    The JSON Patch and diff-match-patch deltas require Server version 7.3 or higher and compliant client SDKs.
    Note that they can be more or less effective depending on the form of the various updates.
    For this reason, they are not applied by default, but by configuration. We can expand if needed.

    Only if deltas cannot be used is there the possibility of applying zlib compression to the whole data in transit.
    At the moment, this is not available for WebSockets, but only for HTTP streaming, which would therefore have to be enforced.

Similar Threads

  1. Replies: 1
    Last Post: April 16th, 2014, 09:28 AM
  2. Replies: 3
    Last Post: July 22nd, 2013, 09:54 AM
  3. Difference between DISTINCT and MERGE mode?
    By hungtt in forum General
    Replies: 1
    Last Post: January 4th, 2011, 12:07 PM
  4. Difference between createEngine and seekEngine
    By webfg in forum Client SDKs
    Replies: 2
    Last Post: April 13th, 2009, 11:07 AM
  5. Replies: 18
    Last Post: March 19th, 2008, 10:00 AM
