Hello Antonio,

The most likely explanation is that the delay is caused by a TCP retransmission of a lost packet.
As far as I know, TCP does indeed apply this kind of exponential backoff to retransmissions.
You can confirm this with a network capture.
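
For example (just a sketch; the interface name and port are placeholders to adapt to your deployment), you could capture the traffic of the affected connection with tcpdump and then look for retransmissions with a Wireshark/tshark display filter:

    tcpdump -i eth0 -w capture.pcap port 8080
    tshark -r capture.pcap -Y "tcp.analysis.retransmission"

Retransmissions of the same segment spaced roughly 1, 2, 4, ... seconds apart would confirm the exponential backoff.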

However, lost packets can be recovered faster when they are followed by other packets: the subsequent packets trigger duplicate ACKs, which allow TCP to perform a fast retransmit instead of waiting for the retransmission timeout.
This condition can be achieved through LS Server configuration, by enlarging the TCP send buffer (see the <sendbuf> element).

In fact, by default, LS Server enforces a small send buffer, which ensures that, in case of a temporary interruption of the communication, the amount of data stuck in the buffer is minimal. During the interruption, subsequent data becomes stale and should be replaced with newer data by conflation, but the Server cannot do that once the data has reached the TCP send buffer.
With a small buffer, TCP immediately exerts backpressure on the Server, which, in turn, conflates the data.
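
To give an idea of what conflation means here (this is only an illustrative sketch in Java, not the actual Server code; the class and method names are invented), the idea is to keep at most one pending update per item, so that a newer value simply overwrites an older one that could not be sent yet:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Illustrative only: while the TCP send buffer is full, keep at most one
    // pending update per item; a newer update replaces the stale one.
    public class ConflationSketch {

        private final Map<String, String> pending = new LinkedHashMap<>();

        // Invoked for every new update produced while the connection is blocked.
        public synchronized void offer(String item, String value) {
            pending.put(item, value); // the old value for this item is discarded
        }

        // Invoked by the sender once the TCP send buffer accepts data again.
        public synchronized Map<String, String> drain() {
            Map<String, String> snapshot = new LinkedHashMap<>(pending);
            pending.clear();
            return snapshot;
        }
    }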

However, the default setting only works well when the throughput of the session is low and the packets sent are small.
When the throughput is higher and the packets are larger, a larger send buffer should be configured, in order to take advantage of the TCP fast-retransmit mechanism mentioned above. The amount of data stuck in the buffer would also be larger, but still acceptable, considering the high session activity.

You should find the best setting for your scenario with a few tests; for instance, you can start with 5000.
An adaptive mechanism for the send buffer setting is not yet available.
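
For reference, assuming you place it where the <sendbuf> element is already documented in your lightstreamer_conf.xml, the setting would simply look like:

    <sendbuf>5000</sendbuf>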