2014-03-18, 17:19
Hi All,
One feature I've been looking forward to in Gotham is the improvements to network / filesystem buffering - specifically, being able to enable read-ahead buffering even on "local" file systems such as SMB, which is helpful when streaming video across a wireless network from a file server, as I do.
On starting to experiment with Gotham betas on both Raspbmc (miappa's builds) and Mac OS (Memphiz's airplay test 10, and now official Beta 2) in the last week I've almost immediately run into a couple of issues while testing the new buffering algorithm - one of them quite major in my opinion.
The first, minor, issue is that Gotham seems to have a mind of its own when it comes to respecting the cachemembuffersize parameter. According to this page:
http://wiki.xbmc.org/index.php?title=HOW...ideo_cache
The default is supposedly 5 MB; however, on an install with no advanced settings overriding it, I'm seeing the buffer (as displayed in codec info) build up to 20 MB on an HTTP stream.
If I set cachemembuffersize to, say, 10 MB, it sometimes respects it and sometimes doesn't. On some streams/videos it will hover almost precisely at half the requested value (5 MB in this case), at other times it will hover at the requested value, and at yet other times it will far exceed it - I've seen it climb to 20 MB when it is set to 10 MB.
Any ideas why it has such a liberal interpretation of the setting? Or is codec info lying to me?
The second, more severe, issue is this - by default readbufferfactor is only 1.0, so compared to Frodo, Gotham is very slow at filling the buffer, as it's only downloading slightly faster than the average bitrate. This means that if a momentary slowdown or pause in the network stream occurs within, say, the first 30 seconds of starting, a buffer under-run occurs almost immediately and the video stops.
This can be alleviated somewhat by increasing readbufferfactor so that it fills the buffer more aggressively when playback first starts - I find 1.5 works quite well without over-saturating the network, and it allows the buffer to reach equilibrium in maybe 10 seconds or so. So for this aspect of the problem I just disagree with the default readbufferfactor being so low, and think it should be at least, say, 1.2 to allow the buffer to fill in a reasonable amount of time.
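For anyone who wants to experiment with the same settings, this is roughly what I've been testing with in advancedsettings.xml (I'm going from memory on the exact tag names, so double-check them against the wiki page above - the buffermode tag is the one that enables buffering on "local" filesystems like SMB):

```xml
<advancedsettings>
  <network>
    <!-- 1 = buffer all filesystems, including local ones such as SMB -->
    <buffermode>1</buffermode>
    <!-- buffer size in bytes: 10485760 = 10 MB -->
    <cachemembuffersize>10485760</cachemembuffersize>
    <!-- fill the buffer at up to 1.5x the average bitrate -->
    <readbufferfactor>1.5</readbufferfactor>
  </network>
</advancedsettings>
```

The file goes in the userdata folder; XBMC needs a restart to pick it up.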
The low default readbufferfactor, however, exposes a bug / regression / design flaw - Gotham will not buffer ahead while the video is paused, whether it was paused manually or automatically due to a buffer under-run.
In Frodo, if you had a buffer under-run, playback would pause automatically but the download would continue; when the buffer filled to a certain threshold, playback would resume automatically. If the speed of the stream was marginal and a bit variable, you could manually pause the video for a minute to let the buffer fill completely, then resume playback - which on a marginal stream might be enough to get uninterrupted playback for a further 5-10 minutes, or perhaps to the end of the clip.
This no longer seems to be possible in Gotham. As soon as I pause playback, the stream stops buffering ahead - the cache figure in codec info freezes and stops increasing. I can wait a couple of minutes and then un-pause, and the cache will only continue to build from the point I un-paused. So manually pausing to allow the buffer to fill is no longer possible.
Worse, if the stream slows down enough for a buffer under-run to occur, playback pauses automatically and the download also stops, just as if you had paused it manually - left alone, a pause caused by a buffer under-run will never recover by itself and will sit on pause forever!
If you then manually press play, then because the buffer is empty and the default readbufferfactor is so low, unless the stream picks up speed almost instantly the buffer will under-run again, playback will pause automatically, and you're back to square one - paused with an empty buffer that is not filling. Sometimes it's nearly impossible to get playback started again in this situation, as there is no way to let the buffer build up before recommencing playback.
Increasing readbufferfactor to 1.5 alleviates the symptoms a lot (but isn't a fix on its own) because the initial download speed when pressing play is 50% faster (assuming the server has enough bandwidth), allowing the buffer to fill and get you out of danger quicker. However, if the server's available bandwidth is only slightly above the average bitrate, the problem can still occur, because a readbufferfactor of 1.5 may then not be the limiting factor.
Bear in mind as well that because of TCP slow start, even if a server can support, say, 120% of the required bandwidth for a stream, it can take a few seconds for the TCP connection to ramp back up to maximum speed after the transfer has been suspended and resumed - it's not uncommon for some servers to take up to 5 seconds after un-pausing before full throughput is regained. (Watch a packet sniffer to see this.)
So it seems to me that there are two problems, one just a default settings change, the other probably an easy fix:
1) The buffer/cache is not allowed to continue downloading/filling while playback is paused. I'll take a wild stab in the dark here and guess that this is a bug in the code that implements readbufferfactor, where it measures the current video bitrate and multiplies it by readbufferfactor to arrive at a network throughput figure - the video bitrate of a paused video is zero, and zero times readbufferfactor is zero... oops. (Perhaps an average bitrate over the last 10 seconds of playback should be used in the calculation when the video is paused.)
2) The default readbufferfactor needs to be at least 1.2, maybe as much as 1.5. 1.0 is just too stingy: it means the buffer takes an eternity to reach equilibrium after a buffer under-run or at the start of playback, so the buffer might as well not be there - it spends most of the time when it's most needed (starting, or resuming from an under-run) nearly empty and filling very slowly, even on a fast connection.
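To illustrate my guess in (1), here's a purely hypothetical sketch - I haven't read the actual XBMC source, and all the names are made up - of how deriving the fill rate from the current bitrate would stall the download on pause, and how a rolling average over recent playback would avoid it:

```python
# Hypothetical sketch of the suspected bug - NOT the real XBMC code.
# If the cache fill rate is computed from the *current* playback bitrate,
# a paused stream (bitrate 0) yields a target of 0 bytes/s and the
# download stalls, which matches the behaviour I'm seeing.

READ_BUFFER_FACTOR = 1.0  # the Gotham default

def target_fill_rate(current_bitrate_bps):
    # suspected behaviour: paused -> current bitrate is 0 -> target is 0
    return current_bitrate_bps * READ_BUFFER_FACTOR

def target_fill_rate_fixed(recent_bitrates_bps):
    # possible fix: average the last N seconds of *playing* bitrate
    # samples, so a pause doesn't zero out the download rate
    samples = [b for b in recent_bitrates_bps if b > 0] or [0]
    return (sum(samples) / len(samples)) * READ_BUFFER_FACTOR

# An 8 Mbit/s stream that has just been paused (current bitrate 0):
print(target_fill_rate(0))                         # -> 0.0 (stalled)
print(target_fill_rate_fixed([8e6, 8e6, 8e6, 0]))  # -> 8000000.0 (keeps filling)
```

Pure speculation, of course, but the symptom (download frozen exactly while paused) fits this shape of bug suspiciously well.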
Has anyone else noticed these same issues, or can any devs offer insights into the changes in the buffering algorithm between Frodo and Gotham and how it is supposed to work? Frodo had its own problems with its buffering algorithm, but apart from being able to buffer more types of streams/filesystems, Gotham actually seems worse off thanks to the lack of buffering ahead during pause.