Is there any difference between the sdk-dotnet adapters of the Moderato and commercial editions?
Hi,
is there any difference between the sdk-dotnet adapters of the Moderato Edition and the commercial editions (that is, Lightstreamer Allegro, Presto, or Vivace)?
I installed both editions on my machine and modified the .NET adapter of the Moderato edition according to my requirements. There, Lightstreamer pushes data to the client perfectly. But when I copy the same .NET adapter over to the commercial edition, the Server is unable to push data to the client continuously (after the first push, the connection closes). Any suggestions regarding this issue?
No, there is no difference. We have to investigate the behaviour on the Allegro/Presto/Vivace version of your application in more depth.
Which connection do you observe closing? Client-Server or Server-Remote Server?
Could you please show us the Server log?
I see that the connection is not really closed, but it is rebound twice within a few seconds (which is unusual); then it remains open until a page reload is performed.
Moreover, the "Content length too small" warning is issued, which is unusual as well;
this case should never happen, as the <content_length> setting in the Server configuration file should always be large enough.
Could you please check the <content_length> setting?
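For reference, the element in question lives in the Server configuration file (lightstreamer_conf.xml in a default installation). The fragment below is only an illustrative sketch; the value shown is an arbitrary example, not a recommendation:

```xml
<!-- Maximum content length, in bytes, announced for a streaming connection's
     HTTP response. Enlarge it if "Content length too small" warnings appear
     in the Server log. The 4000000 figure is just an example value. -->
<content_length>4000000</content_length>
```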
I see that you subscribe to 256 fields. Are your updates very large? A Server log with the LightstreamerLogger.pump category at DEBUG would reveal this.
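The exact syntax for raising that category depends on your Server version's logging setup; in installations using a log4j-style lightstreamer_log_conf.xml, the change might look like this (assumed fragment, check your own file):

```xml
<!-- Assumed log4j-style fragment: raise the pump logger to DEBUG so the
     Server log reveals the size of each outgoing update event. -->
<logger name="LightstreamerLogger.pump">
  <level value="DEBUG"/>
</logger>
```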
Dario, I checked the <content_length> setting in the configuration file; I have not changed anything there, it is still the default setting.
Yes, I am subscribing to 256 fields, and my updates for each field are about 2 KB to 3 KB.
And for your information, with the same settings I can push data to the client continuously in the Moderato edition, but the same is not happening with the commercial edition.
I am attaching log files (generated today) of both the Moderato edition and the commercial edition.
If the update length is so huge, then the bandwidth restrictions become important.
Indeed the Moderato and commercial editions differ on this aspect.
Note that your client requests a bandwidth limit to the Server of 10 kbps, which is very low. While in Moderato edition the feature is not supported and the request is ignored (as shown in the log), in the commercial edition the limit is applied and this slows down the updates flow very much.
Are you really interested in bandwidth control in your application, or did you just forget the "setMaxBandwidth" call in your client code?
About the "Content length too small" warning: it does not seem to signal any problem in this case; however, it should be avoided by enlarging the <content_length> setting.
Today I again observed the data-push behaviour while setting different bandwidths on the client side, as follows:
1. //eng.policy.setMaxBandwidth(100000); (in the first case I commented out the bandwidth line)
Here I observed the LS Server pushing the data for all 256 fields at once every 1 1/2 - 2 minutes (note that it is not streaming data continuously).
2. eng.policy.setMaxBandwidth(100000);
Here I observed the LS Server pushing the data for all 256 fields at once every 1 min 20 sec - 1 1/2 minutes (note that it is not streaming data continuously).
3. eng.policy.setMaxBandwidth(50000);
Here I observed the LS Server pushing the data for all 256 fields at once every 1 - 1 1/2 minutes (note that it is not streaming data continuously).
4. eng.policy.setMaxBandwidth(30);
Here I observed the LS Server pushing the data for all 256 fields at once every 50 sec - 1 min 10 sec (note that it is not streaming data continuously).
The timings mentioned above are not exact; sometimes they even fall outside those boundaries.
So I think that bandwidth is not playing an important role in the streaming of data.
With the Moderato edition I can see data streaming every second on the client side (in Moderato it does not push all the data to the client at once), and the changed data is also pushed every second. This is not happening with the commercial edition, which therefore does not appear live either.
A further bandwidth limitation is probably posed by the Metadata Adapter.
You use Remote Metadata and Data Adapters, right? If you use the Remote Metadata Adapter as it comes from our examples, then this is definitely possible.
Note that both the supplied "DotNetServers.bat" and "DotNetCustomServer.bat" launch scripts set the "max_bandwidth=40" command line argument. This instructs the Metadata Adapter (which is the ready-made LiteralBasedProvider) to return 40 kbps upon "getAllowedMaxBandwidth". You should just remove those arguments.
Note that updates for a single item are always atomic. If your item contains 256 images and all of them have changed, then all that data is sent by the server in a single event. Hence, in order to honour a stringent bandwidth limit, a long pause is imposed afterwards.
This explains what you observe (if the above assumptions are right).
That solution worked for me; now I can see the continuous changes on the client side. But it is somewhat slow. Is there any way to increase the performance of the LS Server, so that whatever changes happen feel live?