LibCurl options
#1
While going over the code looking for the reason behind the various issues with high-bitrate content and early playback aborts after pausing, I found the following.

xbmc/filesystem/CurlFile.cpp Line 514
Code:
// We abort in case we transfer less than 1 byte/second
  g_curlInterface.easy_setopt(h, CURLOPT_LOW_SPEED_LIMIT, 1);

  if (m_lowspeedtime == 0)
    m_lowspeedtime = g_advancedSettings.m_curllowspeedtime;

  // Set the lowspeed time very low as it seems Curl takes much longer to detect a lowspeed condition
  g_curlInterface.easy_setopt(h, CURLOPT_LOW_SPEED_TIME, m_lowspeedtime);

Is there a reason to do this? It seems this would cause an abort any time the cache fills up and bandwidth is throttled back because reads from the handle have stopped, which will of course happen whenever the content is paused for long enough.
#2
Setting curllowspeedtime to a day would essentially turn this off. I've done this myself, and initial testing suggests it fixes the problem of pausing causing an early abort of playback, and it appears to make seeking more robust too. This makes sense: pausing and skipping back are the quickest ways to fill the buffer, and both increase the chance of a period with less than 1 byte/s of download.

What was the original intent of this?
#3
Had someone else confirm that setting
Code:
<curllowspeedtime>86400</curllowspeedtime>
as a workaround fixes the "Aborts playback after pause and skip" bug.
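For anyone else trying it, here's the full advancedsettings.xml context (the setting sits under <network> alongside the other curl options, if I'm reading the settings code right):
Code:
<advancedsettings>
  <network>
    <!-- 86400 seconds = 24 hours; effectively disables the low-speed abort -->
    <curllowspeedtime>86400</curllowspeedtime>
  </network>
</advancedsettings>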

Can someone please tell me what setting CURLOPT_LOW_SPEED_LIMIT was originally meant to fix, before I submit a patch that just comments out these calls?
#4
What happens with this patch when fetching some addon stalls?
#5
That needs to be special-cased afterwards, because at the moment this directly causes bugs when viewing online video streams. Problems downloading addons are less important than problems watching media.

Using the same subsystem for downloading media content as well as downloading application plugins is probably a mistake for this reason. The two are not the same kind of thing, and need to be handled differently.
#6
Also, this doesn't affect stalls where the server sends nothing at all, as that is handled by CURLOPT_CONNECTTIMEOUT rather than CURLOPT_LOW_SPEED_LIMIT. I'm not sure how often an XBMC repo sends its files slowly enough to appear hung.
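To illustrate the distinction in plain libcurl terms (a standalone sketch, not XBMC's code; the URL and timeout values are placeholders):
Code:
#include <curl/curl.h>
#include <cstdio>

int main()
{
  CURL* curl = curl_easy_init();
  if (!curl)
    return 1;

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/somefile");

  // CURLOPT_CONNECTTIMEOUT: give up if the connection itself cannot
  // be established within 30 seconds (server never answers at all).
  curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 30L);

  // CURLOPT_LOW_SPEED_LIMIT/TIME: give up if an *established* transfer
  // runs below 1 byte/s for 10 seconds -- the pair XBMC sets above.
  curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
  curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 10L);

  // With no write callback, the body just goes to stdout.
  CURLcode res = curl_easy_perform(curl);
  if (res == CURLE_OPERATION_TIMEDOUT)
    fprintf(stderr, "aborted: connect timeout or low-speed condition\n");

  curl_easy_cleanup(curl);
  return 0;
}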
Instead of commenting it out, I could turn it into a special case if you tell me how to detect when a plugin is being downloaded. I'm not entirely sure if that's possible, and it should probably be moved out of SetCommonOptions if it becomes a special case.
#7
Okay, having glanced over the code again, I do not see a comprehensive and elegant way to distinguish, from within CCurlFile, between media stream, plugin, and metadata downloads.

Now this raises a problem.

Obviously you want CURLOPT_LOW_SPEED_LIMIT set on plugin and metadata downloads, to avoid leaving potentially hung downloads running indefinitely.

But if you have CURLOPT_LOW_SPEED_LIMIT set on media streams, then pausing and backwards seeking are broken.

Information on whether or not it should be set needs to be passed to CCurlFile::Stat, but I don't know enough about XBMC's architecture to say how, or from where it is called.
#8
If you really want something fun to do.... =D

I'm seeing semi-related problems with streaming web-based media when the server doesn't respond right away (but will in a second or so). Once XBMC receives that first chunk, it'll kick over to "Buffering..." and wait for the rest. If it doesn't get that first chunk in fairly short order, it gives up and silently fails, sometimes accompanied by an error in the logs about failing to create a demuxer. The same links play fine in a web browser or VLC, so if this timeout period were configurable, that would probably fix the issue. I've played with all the advanced settings items and none of them seem to affect this initial "pre-connection" connection.
#9
Another option: maintain the current use of CURLOPT_LOW_SPEED_LIMIT but change its default length. The problem there is that I can readily imagine someone skipping back thirty minutes to an hour in something, or pausing for a thirty-minute phone call.

How *should* XBMC handle a case where someone pauses an internet video stream for an hour to sort their laundry?

I think a major problem is that there's no detection of whether libcurl has abandoned the download; playback just continues from whatever was in the buffer until it runs out, making it non-obvious to the user that the download terminated.

As I said, I currently have a 24-hour timeout on CURLOPT_LOW_SPEED_LIMIT and have yet to notice any issues with metadata and plugin downloads. So I wonder if the current setting is preventing a very rare bug at the expense of creating a much more common one.
#10
After investigation of the bugs, reports from other people who tested the same things, and looking through the source, I'm making the following conclusions.

* CURLOPT_LOW_SPEED_LIMIT combined with a restricted buffer will cause an unexpected abort of playback when a pause or a skip back throttles the transport below 1 byte/s.
* This has been observed, and corrected by specifying a 24-hour timeout in advanced settings, by myself and an independent tester.
* The assumption that a transport that has halted except for keepalives is 'hung' is invalid for media streaming, and should not have been used there.
* However, the same assumption is correct for metadata and plugin files.
* I have been unable to identify a way to detect from within CCurlFile::Stat what context it has been called from.
* There are multiple levels of abstraction between CCurlFile::Stat and the origin of most of those calls.
* The decision as to which file class is used is made primarily based on the URI type, divorced from the context of what is being streamed/downloaded.
* XBMC's file and stream handling architecture assumes all file and stream types are fungible. This conflicts with the different handling needed between time-sensitive media streaming, and non-time-sensitive downloads.

It is beyond the scope of a single patch to correct this issue.

I think the fundamental error here was the use of CCurlFile as a one-size-fits-all downloading and streaming class, and the failure to separate the two different contexts of downloading a file and streaming media for playback.

Any suggestions on how to proceed?
#11
Still waiting for feedback on how I should proceed with this.

The best idea I have so far: create a new class inheriting from CCurlFile (CCurlStream?) that overrides SetCommonOptions, overload CFileFactory::CreateLoader to add a context-indicating argument, and return a CCurlStream when a 'this is a stream' indicator is passed. Then mark the original CFileFactory::CreateLoader as deprecated and replace all existing calls with ones that include a context indicator. Something like the sketch below.
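A rough sketch only (the class name and the SetCommonOptions signature here are hypothetical; the real method would need to be made virtual, and its actual parameters may differ):
Code:
// Hypothetical: a CCurlFile variant for time-sensitive media streams.
class CCurlStream : public CCurlFile
{
protected:
  virtual void SetCommonOptions(CURL_HANDLE* h)
  {
    CCurlFile::SetCommonOptions(h);
    // Undo the low-speed abort set by the base class.
    // Per the libcurl docs, a limit of 0 disables the check entirely.
    g_curlInterface.easy_setopt(h, CURLOPT_LOW_SPEED_LIMIT, 0);
  }
};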
Actually, that doesn't work, because there are still multiple layers of abstraction before anything gets to CFileFactory... This is really a ridiculous number of layers around creating a file/stream object.
#12
So... I thought "Oh, I could put a function in IFile that is overridden by classes that support high-latency streams..." but then I realised that IFiles are never called directly except in special cases; general use goes through CFile, which wraps whichever IFile some other wrapper code selects for it...

There are too many layers of abstraction around file handles; I do not understand why there are wrapper classes around wrapper classes around wrapper classes... Can someone please explain why the file handling system is built like this?

As it stands, it is impossible for me to add anything that signals that *this* file handle is actually a time-sensitive media stream and needs special handling. That seems to be a considerable problem for internet media stream playback.
#13
Why? It is simply a flag like READ_CACHED unless I am missing something.
#14
Those flags are never actually passed to IFile child objects, unless I'm missing something. They're only read and used by CFile.
#15
Right, I'm working on what I consider a dirty hack to push these flags into IFile.
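The shape of it is something like this (a sketch only; the member and flag names are my own, and the real patch will differ):
Code:
// In IFile.h -- give every IFile a flag word that CFile can forward.
// (Existing virtual interface omitted for brevity.)
class IFile
{
public:
  IFile() : m_flags(0) {}
  virtual ~IFile() {}
  void SetFlags(unsigned int flags) { m_flags = flags; }
protected:
  unsigned int m_flags; // e.g. READ_CACHED, or a new 'media stream' flag
};

// In CFile::Open, after the loader is created but before IFile::Open:
//   m_pFile = CFileFactory::CreateLoader(url);
//   m_pFile->SetFlags(flags);   // push the caller's flags down
//   m_pFile->Open(url);
//
// CCurlFile::SetCommonOptions could then skip CURLOPT_LOW_SPEED_LIMIT
// whenever the 'media stream' flag is set.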

Note, I'm working on OS X as a build environment. Rebasing is a massive pain because of needing to rebuild various support parts each time, so don't expect patches to be 100% easy to merge to head.