The only condition we can think of in which polling behaves better than streaming is when the client-side processing capacity is slower than the update rate. Of course, there are also environments whose infrastructure does not support streaming at all, where polling is the only option; all those cases are detected at an early stage by the Stream-Sense mechanism, which is provided by all client libraries, so we can leave them aside.

Whenever we say that "polling is better than streaming", we always assume that update filtering is possible. In terms of overhead, streaming is always the lighter choice (apart from possible caching on the client, which can in any case be bounded). However, polling can become even lighter, because it can enforce more filtering.

In a streaming scenario, if the client-side processing is slower than the update rate, the updates form a queue on the client side; as soon as the client processes the first event in the queue, the Server sends a new event that joins the end of the queue. The queue therefore never drains, and every event is processed only after traversing the whole queue, that is, with a roughly constant delay.
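The effect can be illustrated with a minimal back-pressure simulation. This is not Lightstreamer code: `queue_size` is a stand-in for the combined TCP and client buffers, and both the update rate and the processing time are idealized as constants.

```python
from collections import deque

def simulate_streaming(n_updates: int, processing_time: float,
                       queue_size: int) -> list[float]:
    """A slow streaming client: updates fill a bounded queue, and
    back-pressure admits one new update each time the client consumes
    one. Returns the delay between an update entering the queue and
    being fully processed."""
    queue: deque[float] = deque()
    clock = 0.0
    delays = []
    produced = 0
    # updates are available immediately and fill the queue at start
    while produced < n_updates and len(queue) < queue_size:
        queue.append(clock)          # record enqueue time
        produced += 1
    while queue:
        enqueued = queue.popleft()
        clock += processing_time     # client processes one update
        delays.append(clock - enqueued)
        if produced < n_updates:     # the Server sends the next update
            queue.append(clock)      # as soon as a slot frees up
            produced += 1
    return delays

delays = simulate_streaming(8, processing_time=1.0, queue_size=4)
# after the start-up transient, every update waits queue_size *
# processing_time before being processed
```

Note how the delay settles at the queue traversal time (here 4.0): the queue stays full, so every event pays the same price.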

In a polling scenario, when the slow client receives a poll answer, it takes some time to process all the updates it contains; when finished, it immediately issues a new poll request (we are always talking about "long polling") and gets the answer immediately. Hence each poll answer contains up-to-date data, and the first updates in each answer are processed with short delays. In the polling scenario it is important to ensure that the poll answers don't grow too big. This depends on the subscriptions: for items that allow filtering, each poll answer carries at most one update per item; only items that don't allow filtering can cause problems.
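The interplay between long polling and filtering can be sketched as follows. This is an illustrative model, not Lightstreamer's actual implementation; the `PollingBuffer` class and its method names are hypothetical.

```python
import threading

class PollingBuffer:
    """Server-side buffer for one long-polling client: for items that
    allow filtering, only the latest update per item is kept, so each
    poll answer carries at most one update per item."""

    def __init__(self) -> None:
        self._latest: dict[str, object] = {}
        self._cond = threading.Condition()

    def publish(self, item: str, value: object) -> None:
        with self._cond:
            self._latest[item] = value   # overwrite: filtering in action
            self._cond.notify()

    def poll(self, timeout: float = 30.0) -> dict[str, object]:
        """Long poll: block until at least one update is available,
        then return everything accumulated and reset the buffer."""
        with self._cond:
            self._cond.wait_for(lambda: bool(self._latest),
                                timeout=timeout)
            answer, self._latest = self._latest, {}
            return answer

buf = PollingBuffer()
for price in (10, 11, 12):
    buf.publish("item1", price)   # three updates while the client is busy
buf.publish("item2", 99)
answer = buf.poll()
# filtering collapses the three "item1" updates into the latest one
```

However slow the client is, the answer size is bounded by the number of subscribed items, which is why only unfilterable items can make poll answers grow.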

Note that all the above refers to the case in which the client-side processing is slow. The case in which the network connectivity is poor is a different one; moreover, poor connectivity can manifest itself in various ways. The Server always tries to be robust and to keep streaming connections working: if it cannot write, it simply waits and possibly filters updates. When the block is over, the connection resumes seamlessly; by keeping a small TCP send buffer, we ensure that only a few updates will have remained queued.
If the blocks are long and nothing has been received on a streaming connection for a long time, all LS client libraries will close the connection and the whole session; some of them will also try to open a new session automatically, but still in streaming. We have no evidence that switching to polling in that case would be better, nor is our slowing algorithm targeted at this case.