Sending PTS with Audio packets
#1
So a little background is probably needed first.

I've been attempting to improve xbmc running on the boxeebox. The audio sink is where I started, as I had quite a few issues getting things to work properly. Well, to make a long story short, I got them to work... but not without introducing new issues. Unfortunately, my changes seem to have made the audio go out of sync when passthrough is enabled.

So I went back to the old boxee source to see how they handled A/V sync; they appear to have used a PTS value when creating audio buffers. Their audio renderer's AddPackets took a pts parameter passed in from the DVDAudio class.

I do not want to make changes to xbmc classes unless absolutely necessary, which is why I am here. So my question is three-fold...

1. How do the GetDelay/GetLatency values of the sink currently affect A/V sync? Can I use these to somehow match the sync? I have attempted to modify the values sent back, but without any success in getting any change.

2. Is there another way, from the sink, for me to handle the sync issue?

3. If the answer to 1 and 2 is that I can't fix it that way, what are the feelings about changing the interfaces to allow PTS to be sent with the audio packets? I realize this discussion may be badly timed with Gotham approaching, and I am not intending for these changes to be handled immediately. While I do not mind making the change, I would at least like to have some design discussion with someone familiar with the audio architecture.

I know I'm new to this forum, but feel free to let me know if you see something differently than I do. I'm still becoming familiar with the xbmc codebase and could be completely out in left field here, so feel free to give other suggestions.
#2
GetDelay + GetLatency is the time for a just added audio sample to become audible. GetDelay is the time after which an underrun may occur if no further samples are added.
You need exactly those to control a/v sync. If this does not work, something else must be going wrong.
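To make the relationship concrete, here is a minimal sketch of what the post above describes. The function names and signatures are illustrative only, not XBMC's actual sink API: GetDelay is the time after which an underrun may occur, and a just-added sample becomes audible after delay plus latency.

```cpp
#include <cassert>

// Illustrative sketch only; names are not XBMC's real interfaces.

// Seconds of queued audio the sink has yet to play out: the time
// after which an underrun may occur if no further samples are added.
double GetDelay(double bufferedFrames, double sampleRate)
{
  return bufferedFrames / sampleRate;
}

// A just-added sample becomes audible at clockNow + delay + latency.
// A positive result means the audio is heard late relative to the
// timestamp the player expects.
double AvSyncError(double clockNow, double samplePts,
                   double delay, double latency)
{
  return (clockNow + delay + latency) - samplePts;
}
```

Under this model the player needs no pts in the sink at all: it only needs delay and latency reported accurately to know when queued audio will actually be heard.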

Adding PTS to audio packets requires a reference clock that the pts relates to. How do you want to accomplish this?
#3
Thanks for replying.

Then I am handling GetDelay correctly (number of frames in the buffer divided by the sampling rate). GetLatency, however, is possibly where I need to focus my attention. It changes based on the output mode of the Intel streaming media driver. Unfortunately, I have not yet found a way for the driver to report this latency back to me (which doesn't mean one doesn't exist).

Does the latency get queried multiple times during playback? Or is it queried once and that value then used for the duration of the sink? If it is queried once, does that happen after initialization?

So unfortunately, if I can't find a way to get the latency value back from the driver, it appears that sending pts is the best long-term answer to this specific problem. The driver keeps a master clock, and I would handle pts the same way the video renderer does: convert xbmc pts into driver pts, then associate each buffer I send to the driver with that pts. The driver then handles it internally and keeps things in sync. Or so I hope, I should say; the audio frames go through quite a few hoops before they reach the sink. And honestly, pts only applies when syncing with video, so no pts is needed when just playing music, for example. It would be even better if the sink knew at initialization whether it needed to sync with video or not; that way I could conditionally create a timed audio interface.
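The conversion step described above could be sketched roughly as follows. This is an assumption-laden illustration, not a real Intel SMD API: the driver clock rate and the anchoring scheme are hypothetical, though xbmc's internal pts really does use DVD_TIME_BASE ticks per second (1,000,000).

```cpp
#include <cassert>
#include <cstdint>

// xbmc's internal pts unit: DVD_TIME_BASE ticks per second.
static const double kDvdTimeBase = 1000000.0;

// Map an xbmc pts onto a hypothetical driver clock by anchoring one
// known xbmc pts (basePts) to the driver's current time (driverNow).
// driverTicksPerSec is whatever rate the driver's master clock runs at.
int64_t XbmcPtsToDriverPts(double xbmcPts, double basePts,
                           int64_t driverNow, double driverTicksPerSec)
{
  double seconds = (xbmcPts - basePts) / kDvdTimeBase;
  return driverNow + static_cast<int64_t>(seconds * driverTicksPerSec);
}
```

Each buffer handed to the driver would then carry the converted timestamp, and the driver's own clock would take over scheduling.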

With all of that stated, however, this is a relatively large change for something that would apparently only be used by one specific driver (at least for now). The larger question is: does it make sense to send pts with the audio data, like it is done with the video data? On the surface it seems like information that could be useful to all sinks, although only some can take advantage of it. And if pts is sent, we may need to inform the sink at initialization of whether pts will be coming with the audio packets.

Hopefully that is rather coherent. I probably should drink a cup of coffee before writing long-winded posts like this. :-)
#4
The boxee source used a hw audio device that needs pts. There is also a hw video device that needs pts. A hw sync unit ties them together. Our current dvdplayer code structure is not suitable for having sync controlled by an external source; we would need a refactor.

Many non-desktop boxes have hw audio devices and external sync sources. AMLPlayer came about for this very reason, until I figured out a way to create a dvdplayer codec that handled hw sync internally.

It would be nice to have audio sinks pass pts, but this becomes complicated when resampling or transcoding.
#5
We don't really send pts to the video renderer; we send an absolute time related to the player's clock source. Sending a timestamp into a video or audio renderer only makes sense if those drivers have access to the reference clock.
Currently we assume double buffering for video. If a user has configured triple buffering, video is off by one frame time.
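The buffering offset described above is simple arithmetic; a small sketch (function and parameter names are illustrative, not xbmc code):

```cpp
#include <cassert>

// If the player compensates for assumedBuffers swap-chain buffers but
// the display path actually uses actualBuffers, presentation is off by
// the difference times one frame time (e.g. 1/60 s at 60 Hz).
double VideoPresentError(int actualBuffers, int assumedBuffers,
                         double frameTime)
{
  return (actualBuffers - assumedBuffers) * frameTime;
}
```

So with the double-buffering assumption, a triple-buffered setup at 60 Hz presents video roughly 16.7 ms later than the player thinks.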
#6
Thanks for the replies. This is exactly the kind of information I was looking for.

Resampling/Transcoding does complicate sending pts a bit.

While pts may not be sent explicitly for use by the video renderer, it does come along with the DVDVideoPicture class. While I haven't dug into the video rendering section as much, that is currently how the boxee-xbmc fork is handling setting the video hw pts.

I think for now the answer is: I need to come up with a method for syncing the two streams that does not involve pts. And honestly, I think I would have that if I could get the latency value. Maybe there is a way for me to discover it even if the driver doesn't directly report it.

The pie-in-the-sky answer is: it would be nice to send pts with the audio packets. :-D
#7
Quote:Does the latency get queried multiple times during playback? Or is it queried once and then that value used for the duration of the sink? If it is queried once does it happen after the Initialization?

It is only queried after initialization of the sink. Currently it is only used on Android, where we observed a/v sync issues too.
#8
Thanks for the info. I think I can make this work, but so far I've had to use "magic" values for latency to get the streams to sync. So right now it is not a general fix but one specific to the boxee box's observed latency values. Not ideal by any means. Hopefully there is a way for me to find the latency without resorting to specific hard-coded values.
