What's the point of GPU decoding anymore?
#1
Hi everyone,

I'm hoping someone can help explain the point of going through the trouble of setting up hardware GPU video decoding, specifically on CPUs that are plenty powerful to decode everything you throw at them in software (Intel i3, etc.). I've read through endless threads of people fighting to get Intel GPU decoding working in Linux on CPUs that can easily play back 1080p content in software without breaking a sweat. What is the upside, especially in Linux, where it can be a chore to get proper support? I must be missing some obvious benefits. Thanks!
Reply
#2
Because low-power processors are also getting more popular (Atom, AMD's APUs, etc.). Also, new video codecs will come out that are even more taxing: H.265 is due in about a year and will be very hard to decode on most modern processors. It will need hardware decoding support to really take off.

Then there are ARM processors. There's no way those are going to be serious HTPC contenders without hardware decoding. Thanks to ARM processors and hardware decoding, we're very close to having $50-and-under HTPCs that can handle a full Blu-ray rip (even in 3D, so 1080p+).

Then there are portables. Anyone with a laptop will tell you that hardware video decoding is a must if you want your battery to last.
Reply
#3
The ability to use less power-hungry hardware to save on space/power requirements and to realize significant reductions in noise/heat generation. Check out the Raspberry Pi and Allwinner A10 threads to see how, by using HW decoding, a tiny device with an "underpowered" CPU can potentially run XBMC and play back 1080p content.
Reply
#4
Because it's a cool buzzword to put on a feature list.

Now to sit back and watch the line.
Reply
#5
Because low-powered nettops like the Acer Revo are a very popular option as an HTPC. They will play full HD using the Ion GPU, but will struggle on most things if using just the Atom CPU. For me, GPU decoding is a must.
Reply
#6
All true, but for me a big disadvantage of GPU decoding is that most of the work happens inside closed-source drivers. In the old days, a CPU and FFmpeg did most of the work for free. With the rise of GPU decoding, functionality is once more locked down and coupled to hardware.
Reply
#7
Good thing you have a choice.
Reply
#8
In many ways, general-purpose computing and free/open-source software (FOSS) go hand in hand. It's something to think about for a FOSS project like XBMC, since FOSS HTPC <> general-purpose computing.

But yes, you still can choose which hardware you'll buy.
Reply
#9
No one wants a noisy media center. Sure, an i7 can handle everything you toss at it, but it's going to crank up the cooling fans when all those CPU cores get busy.

GPU decoding works, and works well, provided (wait for it...) that encoding specs are followed and you don't encode with non-standard video format settings. In other words, tell the people doing the encoding to stop fiddling with settings they know nothing about in their attempts to minimize file size while keeping good quality. Much smarter people standardized the encoding specs for a reason.
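One way to check whether a file actually sticks to spec before blaming the decoder: ask ffprobe for the stream's profile and level. A minimal sketch, assuming ffprobe (from the ffmpeg suite) is on the PATH; the file name is hypothetical:

```python
import json
import subprocess

# Hypothetical input file; substitute your own rip.
VIDEO = "movie.mkv"

# Query the first video stream's codec, profile, and level as JSON.
out = subprocess.check_output([
    "ffprobe", "-v", "error",
    "-select_streams", "v:0",
    "-show_entries", "stream=codec_name,profile,level",
    "-of", "json", VIDEO,
])

stream = json.loads(out)["streams"][0]
print(f"codec:   {stream.get('codec_name')}")
print(f"profile: {stream.get('profile')}")   # e.g. "High"
print(f"level:   {stream.get('level')}")     # e.g. 41 -> Level 4.1

# Blu-ray tops out at High profile, Level 4.1, and many fixed-function
# decoders are built to that limit; anything above it risks falling
# back to software (or failing outright).
if stream.get("codec_name") == "h264" and stream.get("level", 0) > 41:
    print("Warning: level above 4.1 -- may not decode in hardware.")
```

If the level comes back above 4.1 or the profile is something exotic, that's the "diddling around with settings" the post above is complaining about.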
Reply
#10
Thank you for the replies! I completely understand the need on ARM and low-powered x86 machines (Atom + Ion, etc.). What I'm really curious about is, say, I buy an i3-2xxxx processor that idles at ~8 W with a max TDP of ~17 W. Apparently these can handle 1080p H.264 in software with no sweat; 17 W is peanuts for a max TDP, and the CPU likely won't get anywhere near that just decoding video anyway. For that exact use case, is it even worth bothering with GPU decoding if all I care about is smooth 1080p H.264 playback?
Reply
#11
Not really :D

CPU decoding can even give you (in theory) better picture quality, as a software decoder is updatable, while a hardware decoder isn't.
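If you'd rather measure than guess, you can benchmark a software decode directly. A minimal sketch, assuming a reasonably recent ffmpeg build on the PATH (one that prints a speed= figure in its progress output); the sample file name is hypothetical:

```python
import re
import subprocess

# Hypothetical 1080p H.264 sample; substitute your own file.
VIDEO = "sample_1080p.mkv"

# Software-decode the whole file as fast as possible: -benchmark
# reports CPU time used, and the null muxer discards the frames.
result = subprocess.run(
    ["ffmpeg", "-benchmark", "-i", VIDEO, "-f", "null", "-"],
    capture_output=True, text=True,
)

# ffmpeg reports progress like "speed=3.4x" on stderr; anything
# comfortably above 1x means the CPU keeps up in real time.
match = re.search(r"speed=\s*([\d.]+)x", result.stderr)
if match:
    speed = float(match.group(1))
    print(f"Software decode speed: {speed:.1f}x real time")
    print("Plenty fast" if speed > 1.2 else "Marginal -- GPU decode may help")
```

If your i3 decodes your worst-case file at several times real time, GPU decoding buys you nothing for playback smoothness.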
Reply
#12
Because XBMC's ffmpeg decoder is still single-threaded, and on certain desktop CPUs it can still drop frames because the single core the thread runs on can run out of resources.

Though I hear they're working on finally fixing that glaring fault.
Reply
#13
An i3 should be enough to CPU-decode 1080p H.264 without dropping frames, though. My 2009 Core 2 Duo can do as much.
Reply
#14
(2012-06-27, 00:36)Ned Scott Wrote: An i3 should be enough to CPU-decode 1080p H.264 without dropping frames, though. My 2009 Core 2 Duo can do as much.

But remember, it's not about the whole CPU. XBMC's bundled ffmpeg isn't multithreaded, so only the power of a single core is relevant. I can drop frames in XBMC while decoding on an i7. Why? It's a 2.0 GHz quad-core i7 laptop, plenty of power, but when only one core counts, 2.0 GHz runs short on occasion. (Very rare occasions, I'll admit, but it happens, and it's worse with 10-bit H.264, which is more demanding.) In the mobile i3 range there are chips as slow as 1.3 GHz per core.

This is the problem when something isn't multithreaded in a multi-core world: the chip can be fast enough as a whole, but without multithreading it won't be efficiently utilized.

Since 10-bit H.264 can't be decoded using the GPU, and neither will H.265 (at least not immediately; I'm SURE a solution will come around eventually), having an inefficient CPU decoder is kind of a fault of XBMC. It's the same reason everyone rejoiced when XBMC got DXVA: it temporarily negated the issue of XBMC's inability to fully utilize the CPU.
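You can see the single-core ceiling for yourself with a standalone ffmpeg binary (not XBMC's bundled copy). A rough sketch: -threads 1 mimics a single-threaded decoder, while -threads 0 lets ffmpeg use every core; the file name is hypothetical:

```python
import subprocess
import time

# Hypothetical 10-bit ("Hi10P") H.264 sample; swap in your own file.
VIDEO = "sample_10bit.mkv"

def decode_time(threads: int) -> float:
    """Time a full software decode with a fixed ffmpeg thread count."""
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-threads", str(threads), "-i", VIDEO,
         "-f", "null", "-"],  # decode only, discard the frames
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return time.perf_counter() - start

single = decode_time(1)   # one decode thread, like XBMC's bundled ffmpeg
multi = decode_time(0)    # 0 = auto: use all available cores
print(f"1 thread:  {single:.1f}s")
print(f"all cores: {multi:.1f}s ({single / multi:.1f}x faster)")
```

The gap between those two numbers is exactly the headroom XBMC leaves on the table by decoding on one core.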
Reply
#15
A little more info on ffmpeg being single-threaded: http://forum.xbmc.org/showthread.php?tid=130180
Reply
