I have a couple of questions I hope you guys can answer.
My understanding of the DVDPlayer video pipe is roughly:
1. Player receives a demuxer package.
2. The demuxer package is sent to the Video Codec for decoding.
   2.1. The Video Codec is split into two components, DVDVideoCodecXXX and XXXVideo, where the former is in many cases nothing but a thin wrapper around the latter. Is there any particular reason for this split? Should I intentionally not merge the two components into one?
3. The Video Codec absorbs the data and optionally responds that a Picture is available.
4. If a Picture is available, the Renderer is configured to handle pictures of that type, if it isn't already.
5. The Picture is forwarded to the Renderer.
6. The UI is rendered (waiting for vsync), then the player sleeps for the time left in the frame.
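To make sure I've understood it, here's how I'd sketch one iteration of that loop. The type and function names are illustrative stand-ins, not the actual DVDPlayer symbols; only VC_PICTURE-style return codes come from the real code:

```cpp
#include <queue>

// Illustrative stand-ins for the real DVDPlayer types.
struct DemuxPacket { int pts; };
struct Picture     { int pts; };

// Codec return codes, in the spirit of the real VC_* values.
enum DecodeResult { VC_BUFFER, VC_PICTURE };

// Hypothetical codec: absorbs a packet, then yields it as a picture.
class VideoCodec {
  std::queue<DemuxPacket> pending_;
public:
  DecodeResult Decode(const DemuxPacket& pkt) {
    pending_.push(pkt);
    return VC_PICTURE;                 // step 3: a picture is available
  }
  Picture GetPicture() {
    Picture pic{pending_.front().pts};
    pending_.pop();
    return pic;
  }
};

class Renderer {
  bool configured_ = false;
public:
  int frames_rendered = 0;
  void ConfigureIfNeeded(const Picture&) { configured_ = true; }  // step 4
  void Render(const Picture&)            { ++frames_rendered; }   // step 5
};

// Steps 1-6 from the list above, for one demuxer package.
void PlayerIteration(VideoCodec& codec, Renderer& renderer, const DemuxPacket& pkt) {
  if (codec.Decode(pkt) == VC_PICTURE) {
    Picture pic = codec.GetPicture();
    renderer.ConfigureIfNeeded(pic);
    renderer.Render(pic);
    // step 6: render UI, wait for vsync, sleep for time left in frame
  }
}
```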
This is quite an inefficient use of the hardware, leading to pipeline bubbles and choppy frame rates. What's the recommended way to keep the hardware busy?
Currently I query the status of the hardware, and if it's less busy than some threshold, the Video Codec reports a "dropped" frame just to avoid steps 4-6 until the hardware queue is somewhat full again. The pipeline respects PTS values, so there's no adverse effect from feeding it lots of data in bursts, but it does mean there are a lot of non-VC_PICTURE returns from the Video Codec's Decode function. Is there any issue with that? I was told this reports a low video frame rate somewhere, even though playback is actually silky smooth.
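Concretely, the throttling decision I'm making looks roughly like this. The queue-depth query and the threshold are mine, and I'm using a generic VC_BUFFER-style code to stand in for whatever the "dropped"/no-picture report should be:

```cpp
// Codec return codes in the spirit of the real VC_* values; VC_BUFFER here
// plays the role of the "no picture yet / dropped" report from my description.
enum DecodeResult { VC_BUFFER, VC_PICTURE };

constexpr int kQueueThreshold = 4;  // assumed: tune to the hardware's queue depth

// While the hardware queue is below the threshold, keep claiming no picture is
// ready, so the player keeps feeding packets (skipping steps 4-6). Once the
// queue is comfortably full, surface pictures as usual.
DecodeResult ThrottledDecodeResult(int hwQueueDepth) {
  if (hwQueueDepth < kQueueThreshold)
    return VC_BUFFER;   // burst-feed: ask for the next demuxer package
  return VC_PICTURE;    // queue full enough: hand a picture to the renderer
}
```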
The hardware video pipe is its own state machine and should be told when playback is paused, resumed, fast-forwarded, or rewound. What's the recommended way of hooking that up?
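What I'd like to end up with is roughly the mapping below. The speed convention (0 = paused, 1 = normal, >1 = fast forward, <0 = rewind) and the state names are assumptions on my part, not the existing player API:

```cpp
// States the hardware pipe's state machine would need to be told about.
enum class HwPipeState { Paused, Playing, FastForward, Rewind };

// Map a player speed to the hardware pipe state, assuming the convention:
// 0 = paused, 1 = normal playback, >1 = fast forward, <0 = rewind.
HwPipeState StateForSpeed(int speed) {
  if (speed == 0) return HwPipeState::Paused;
  if (speed < 0)  return HwPipeState::Rewind;
  if (speed > 1)  return HwPipeState::FastForward;
  return HwPipeState::Playing;
}
```

The question is where in the player this should be hooked in so the hardware pipe hears about every transition.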