Now it is clear to me how buffering works.

I have the following situation for which I would like to propose a feature request:

I have an item with a very high update frequency, and a client subscribed in DISTINCT mode cannot keep up with all the updates without sometimes losing some records. That's a fact.

I saw that another streaming solution does the following when streaming executed orders (price, quantity, and timestamp):
- If a number of consecutive records have to be discarded (in our case, because the buffer on the client or server side has reached the allowed limit), it groups them all in a way similar to this SQL query:

select price, max(timestamp), sum(quantity)
group by price

Very often, when there is a large burst of updates at the same time, they will have the same price, so they can be grouped into just one record or two, and this resolves the issue.

The idea is that LS invokes a Data Adapter or Metadata Adapter function with the to-be-discarded records as input, and the function returns a smaller list based on custom logic (in my case, the aggregation expressed by the SQL above).
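Just to make the proposal concrete, here is a minimal sketch of what such a hook could do with the batch of to-be-discarded records. The record shape and field names (price, quantity, timestamp) are my own assumptions for illustration; this is not an existing LS API.

```python
from collections import OrderedDict
from dataclasses import dataclass

# Hypothetical shape of one to-be-discarded update (field names assumed).
@dataclass
class Trade:
    price: float
    quantity: int
    timestamp: int

def conflate(discarded):
    """Collapse a burst of to-be-discarded records per price level,
    keeping the summed quantity and the latest timestamp -- the same
    result as: select price, max(timestamp), sum(quantity) group by price."""
    by_price = OrderedDict()
    for t in discarded:
        if t.price in by_price:
            kept = by_price[t.price]
            kept.quantity += t.quantity                      # sum(quantity)
            kept.timestamp = max(kept.timestamp, t.timestamp)  # max(timestamp)
        else:
            # copy, so the input records are left untouched
            by_price[t.price] = Trade(t.price, t.quantity, t.timestamp)
    return list(by_price.values())

burst = [Trade(100.5, 10, 1), Trade(100.5, 5, 3), Trade(101.0, 2, 2)]
print(conflate(burst))
# two records survive instead of three:
# [Trade(price=100.5, quantity=15, timestamp=3), Trade(price=101.0, quantity=2, timestamp=2)]
```

In a burst where most trades share the same price, the returned list is much shorter than the input, so the client falls behind far less.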

I thought about implementing this inside the Data Adapter, but it turns out to be difficult (it would mean holding back calls to the listener.update() API until enough events have accumulated...).

What do you think?

Thanks
A