Android: how to add hardware video decoding support
#46
(2013-01-22, 17:35)davilla Wrote: see CAMLCodec::Process

get_pts_video returns the pts of the last video frame that was presented by the hw decoder

GetPlayerPtsSeconds returns the pts of dvdplayer according to the audio clock

SetVideoPtsSeconds is where we tell the hw video decoder to adjust its presentation pts to match dvdplayer.

Hi davilla. For our hw device, I can set the video pts on the hw decoder through an interface when I send it video data.

I got the pts from the arguments of the function Decode(unsigned char *pData, size_t size, double dts, double pts); the values are here: http://pastebin.ca/2316591 . Using this pts, I cannot synchronize video and audio successfully.

If I use GetPlayerPtsSeconds to get the pts, the values are here: http://pastebin.ca/2316592 . There are so many zeros in the list that I don't know how to use it.

So I am not sure which pts is the right one to use; could you please give me some suggestions?
Reply
#47
(2012-12-24, 04:31)davilla Wrote: Since aml decodes to a separate video plane that is not accessible by OpenGLES, dvdplayer and the renderer run in a bypass mode: the renderer knows where the video is set to appear and makes a hole for it. Beyond that it does nothing to render the video, as it does not handle the actual rendering of the video frames.

Most embedded solutions work the same way with hardware decoders: video is actually decoded to a separate video plane that sits under the framebuffer/opengles layer, and the two are blended together with a hw scaler.

What do you mean by "the video is set to appear and makes a hole for it"? Does it mean that the framebuffer/opengles layers are all set to be transparent, or something else?

I see the function ShowMainVideo in the AMLCodec source code, and it seems that you operate on the hw platform SDK directly. Does it mean that if I want to make all the framebuffer/opengles layers transparent, our hw platform should offer a corresponding interface for us?

If there are many framebuffer/opengles layers above the video plane, how do you know how many layers need to be set transparent?
Reply
#48
(2013-02-22, 09:09)adolph Wrote:
(2013-01-22, 17:35)davilla Wrote: see CAMLCodec::Process

get_pts_video returns the pts of the last video frame that was presented by the hw decoder

GetPlayerPtsSeconds returns the pts of dvdplayer according to the audio clock

SetVideoPtsSeconds is where we tell the hw video decoder to adjust its presentation pts to match dvdplayer.

Hi davilla. For our hw device, I can set the video pts on the hw decoder through an interface when I send it video data.

I got the pts from the arguments of the function Decode(unsigned char *pData, size_t size, double dts, double pts); the values are here: http://pastebin.ca/2316591 . Using this pts, I cannot synchronize video and audio successfully.

If I use GetPlayerPtsSeconds to get the pts, the values are here: http://pastebin.ca/2316592 . There are so many zeros in the list that I don't know how to use it.

So I am not sure which pts is the right one to use; could you please give me some suggestions?

Code:
1791
1708
1666
1750
1958

Notice anything funny with the numbers? Like that they are out of order? PTS can be that way, out of order when going to the decoder. But you want it ordered when using it for timing. It's important to feed the hw decoder the right PTS and use a properly ordered PTS for timing.

You might check the current pivos source code, CDVDVideoCodecAmlogic in particular.
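To picture the reordering point above (this is not XBMC code, just a minimal sketch; PtsReorderQueue and its depth parameter are hypothetical), a small buffer can absorb decode-order pts values and hand them back in presentation order:

```cpp
#include <queue>
#include <vector>
#include <functional>

// Hypothetical helper: buffers pts values as they arrive in decode order
// and hands them back in presentation (ascending) order. The buffer depth
// just needs to cover the codec's frame-reordering (B-frame) window.
class PtsReorderQueue {
public:
  explicit PtsReorderQueue(std::size_t depth) : m_depth(depth) {}

  // Push a pts exactly as the demuxer delivers it (decode order).
  void Push(double pts) { m_heap.push(pts); }

  // True once enough pts are buffered to pop safely in presentation order.
  bool Ready() const { return m_heap.size() >= m_depth; }

  // Pop the smallest buffered pts, i.e. the next one in presentation order.
  double Pop() { double pts = m_heap.top(); m_heap.pop(); return pts; }

private:
  std::size_t m_depth;
  std::priority_queue<double, std::vector<double>, std::greater<double>> m_heap;
};
```

Feeding the sample values 1791, 1708, 1666, 1750 through a depth-4 queue pops them back starting 1666, 1708, 1750, which is the ordering you want for timing.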
Reply
#49
(2013-02-22, 12:05)adolph Wrote:
(2012-12-24, 04:31)davilla Wrote: Since aml decodes to a separate video plane that is not accessible by OpenGLES, dvdplayer and the renderer run in a bypass mode: the renderer knows where the video is set to appear and makes a hole for it. Beyond that it does nothing to render the video, as it does not handle the actual rendering of the video frames.

Most embedded solutions work the same way with hardware decoders: video is actually decoded to a separate video plane that sits under the framebuffer/opengles layer, and the two are blended together with a hw scaler.

What do you mean by "the video is set to appear and makes a hole for it"? Does it mean that the framebuffer/opengles layers are all set to be transparent, or something else?

I see the function ShowMainVideo in the AMLCodec source code, and it seems that you operate on the hw platform SDK directly. Does it mean that if I want to make all the framebuffer/opengles layers transparent, our hw platform should offer a corresponding interface for us?

If there are many framebuffer/opengles layers above the video plane, how do you know how many layers need to be set transparent?

On Pivos there are these layers, ordered from top to bottom

fb1
fb0
video

XBMC's OpenGLES uses fb0, so to see video under it, anything you draw must be transparent. The code in AMLPlayer and AMLCodec listens to the renderer for where it wants the video to appear, then sets the video axis accordingly.

ShowMainVideo is only used to hide the video layer so it does not show before we are ready to show it.
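The "hole" described above can be pictured as writing fully transparent pixels over the video region of the GUI layer. A minimal CPU-side sketch (PunchVideoHole is a hypothetical name; real code would do the equivalent on the GPU with OpenGLES blending):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical illustration: clear the video region of an RGBA8888 GUI
// framebuffer to fully transparent so the hw video plane below shows through.
void PunchVideoHole(std::vector<std::uint32_t>& fb, int fbWidth,
                    int x, int y, int w, int h)
{
  for (int row = y; row < y + h; ++row)
    for (int col = x; col < x + w; ++col)
      fb[static_cast<std::size_t>(row) * fbWidth + col] = 0x00000000u; // alpha = 0
}
```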
Reply
#50
(2013-02-23, 00:56)davilla Wrote: On Pivos there are these layers, ordered from top to bottom

fb1
fb0
video

XBMC's OpenGLES uses fb0, so to see video under it, anything you draw must be transparent. The code in AMLPlayer and AMLCodec listens to the renderer for where it wants the video to appear, then sets the video axis accordingly.

ShowMainVideo is only used to hide the video layer so it does not show before we are ready to show it.

So anything above the video area should be transparent; do you do this by setting the alpha of fb1 and fb0 with the hw platform SDK?
Is there some interface in XBMC which we can use for this purpose?
Reply
#51
I think I'm confused now: what exactly are you trying to do? XBMC handles most of this already. Are you trying to code your own video player that does not use XBMC?
Reply
#52
(2013-02-23, 06:44)davilla Wrote: I think I'm confused now: what exactly are you trying to do? XBMC handles most of this already. Are you trying to code your own video player that does not use XBMC?

davilla, no, I am sure I want to do the same thing you do. But right now I am unclear about two things: one is how to synchronize video and audio, and the other is how to see the video on the video plane.

You say XBMC handles most of this already. Does it mean you do nothing yourself to make things transparent, and XBMC does it all itself?
Reply
#53
Correct, see CAMLPlayer::SetVideoRect

The renderer knows where the skin wants the video to appear, so it calls CAMLPlayer::SetVideoRect each frame with a SrcRect and DestRect. This routine tells the hw decoder where to put the video. The skin setup has already defined where and how the video is to show, so in a sense it's automatic.
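The last step of that flow can be sketched as a plain function that takes the renderer's destination rect and clamps it to the display bounds before it would be programmed as the hw video axis (the struct and function names here are hypothetical, not XBMC's):

```cpp
// Hypothetical stand-in for the rects the renderer hands to SetVideoRect.
struct VideoRect { int x1, y1, x2, y2; };

// Clamp the requested destination rect to the display bounds; the result is
// what would be programmed as the hw video axis.
VideoRect ClampToDisplay(VideoRect dest, int dispW, int dispH)
{
  if (dest.x1 < 0)     dest.x1 = 0;
  if (dest.y1 < 0)     dest.y1 = 0;
  if (dest.x2 > dispW) dest.x2 = dispW;
  if (dest.y2 > dispH) dest.y2 = dispH;
  return dest;
}
```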

If you are doing an internal player, then you get to handle video/audio. Libamplayer does this for you.

Over in the Pivos repo, we now use dvdplayer and a new amlcodec that does only hw video decode. There, CAMLCodec::SetVideoRect does a similar job, but we only handle video and let dvdplayer handle audio. In this case we keep a/v sync in CAMLCodec::Process, a thread that tracks and adjusts the video pts to match the player pts.
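The track/adjust step of such a sync thread can be sketched as a pure function (hypothetical name and threshold; the real Process() loop works against the hw decoder's clock): if the hw video pts drifts past a tolerance from the player pts, snap it back.

```cpp
#include <cmath>

// Hypothetical sketch of the correction step in a Process()-style thread:
// compare the hw decoder's video pts against the player pts (driven by the
// audio clock) and return the pts the video clock should be set to.
double AdjustVideoPts(double videoPtsSec, double playerPtsSec, double maxDriftSec)
{
  if (std::fabs(videoPtsSec - playerPtsSec) > maxDriftSec)
    return playerPtsSec;   // drifted too far: resync video to the player clock
  return videoPtsSec;      // within tolerance: leave the video clock alone
}
```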
Reply
#54
(2013-02-24, 06:09)davilla Wrote: Correct, see CAMLPlayer::SetVideoRect

The renderer knows where the skin wants the video to appear, so it calls CAMLPlayer::SetVideoRect each frame with a SrcRect and DestRect. This routine tells the hw decoder where to put the video. The skin setup has already defined where and how the video is to show, so in a sense it's automatic.

If you are doing an internal player, then you get to handle video/audio. Libamplayer does this for you.

Over in the Pivos repo, we now use dvdplayer and a new amlcodec that does only hw video decode. There, CAMLCodec::SetVideoRect does a similar job, but we only handle video and let dvdplayer handle audio. In this case we keep a/v sync in CAMLCodec::Process, a thread that tracks and adjusts the video pts to match the player pts.

davilla, thank you for your patient answer. You say you adjust the video pts to match the player pts; why not use the player pts directly? When I use GetPlayerPtsSeconds, the values are here: http://pastebin.ca/2316592 . Can I use them directly as my video pts? But there are so many zeros and negative values.
Reply
