The first log suggests that the JVM process is suffering from a CPU shortage, since all the internal thread pools appear unable to dequeue their tasks.
Moreover, the "Extra sleep" statistic from the Internal Monitor log shows that threads issuing Thread.sleep were scheduled, on average, 90 ms later than expected. Note that normally this figure is 0, and we have found it to be much lower even in problematic cases.

Have you traced the CPU usage of the process? Could you confirm this suspicion in some way?
Are there other processes on the host that may be competing with the Server JVM for CPU resources?

A possible cause of high CPU usage is frequent Garbage Collection, triggered by either a memory shortage or very intense allocation activity; in fact, the second log snippet clearly shows a GC activity issue.
However, the latter may just be a consequence of the previous problems. We find no evidence of a memory shortage in the first log, partly because only a couple of samples of the Internal Monitor log are available.

To analyze the memory requirements of your usage scenario, we should collect many samples of the "Free" and "Total Heap" statistics (with LightstreamerMonitorText set to TRACE) while the Server is behaving normally.
By following the changes in the used heap (i.e. total minus free), we can estimate the rate at which memory is being collected.
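
For instance, here is a minimal sketch (a hypothetical post-processing helper, not part of Lightstreamer) of how the collection rate could be estimated from a series of used-heap samples taken at a fixed interval: every drop in used heap is assumed to be memory reclaimed by a GC, and the drops are summed over the observation period.

import java.util.List;

// Hypothetical helper, not a Lightstreamer API.
public class GcRateEstimator {

    // Estimated collection rate, in the same unit as the samples per second:
    // each drop in used heap is counted as memory reclaimed by a GC.
    static double collectionRate(List<Long> usedHeapSamples, double intervalSeconds) {
        long collected = 0;
        for (int i = 1; i < usedHeapSamples.size(); i++) {
            long delta = usedHeapSamples.get(i) - usedHeapSamples.get(i - 1);
            if (delta < 0) {
                collected -= delta;   // a negative delta means memory was reclaimed
            }
        }
        double elapsedSeconds = intervalSeconds * (usedHeapSamples.size() - 1);
        return elapsedSeconds > 0 ? collected / elapsedSeconds : 0.0;
    }

    public static void main(String[] args) {
        // Example: used heap (total - free) in MB, sampled every 10 seconds.
        List<Long> usedMB = List.of(300L, 450L, 600L, 250L, 400L, 550L, 200L);
        System.out.printf("~%.1f MB/s collected%n", collectionRate(usedMB, 10));
    }
}

The same arithmetic can of course be applied by hand, or in a spreadsheet, to the values read from the Monitor log.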
Of course, you could gather more accurate measurements by acting at the system level (that is, outside the Lightstreamer scope) and having the JVM log its own GC statistics, by adding the proper settings to the launch script.
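
For example (the exact flags depend on your JVM version, and "gc.log" is just a placeholder path), the JVM options in the launch script could include something like:

# JDK 9 or later (unified GC logging)
-Xlog:gc*:file=gc.log

# JDK 8 or earlier
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:gc.log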

So, you should monitor the whole lifecycle of the Server process and then see whether there is any linear relation between:
- the CPU usage by the JVM process;
- the GC activity in terms of collected memory;
- the number of sessions and/or the "Outbound throughput".
This should allow you to determine whether the problem is due to an unsustainable level of activity.
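
If it helps, a rough way to check for such a linear relation is to compute the correlation coefficient between two of the sampled series. Here is a minimal, self-contained sketch; the sample values are invented, purely for illustration:

import java.util.Arrays;

public class LinearRelationCheck {

    // Pearson correlation coefficient between two equally long sample series.
    static double correlation(double[] x, double[] y) {
        double meanX = Arrays.stream(x).average().orElse(0);
        double meanY = Arrays.stream(y).average().orElse(0);
        double cov = 0, varX = 0, varY = 0;
        for (int i = 0; i < x.length; i++) {
            double dx = x[i] - meanX, dy = y[i] - meanY;
            cov += dx * dy;
            varX += dx * dx;
            varY += dy * dy;
        }
        return cov / Math.sqrt(varX * varY);
    }

    public static void main(String[] args) {
        // Hypothetical samples taken at the same times: CPU usage (%) of the JVM
        // process and number of active sessions. A coefficient close to 1 would
        // suggest that the CPU load simply scales with the activity level.
        double[] cpu      = {12, 25, 33, 48, 61, 70};
        double[] sessions = {100, 210, 290, 400, 520, 610};
        System.out.printf("correlation = %.2f%n", correlation(cpu, sessions));
    }
}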