TESTING a Virtual XBMC Machine running ESXi
#46
If your goal is to get multiple XBMC instances outputting over multiple graphics cards to unique displays, you can do this with regular ol' Linux.

VM technology just isn't at the point where a PCIe graphics card can be passed through properly (the model in that field is shifting to things like RemoteFX, which passes 3D/video to thin clients via RDP).

All you would need is to load up on graphics cards, run a separate X session for each card with its own outputs, and then run XBMC under a different user profile in each session (so their configs can be unique). From there, the only mucking around is pre-defining which input devices control which sessions. The easy way is to have each XBMC instance's web interface listen on a different port (since they all effectively share the same IP address - though you could use IP aliasing and have each instance listen on its own IP) and control them with the Android/iPhone remote apps. The trick will be getting the drivers to address each card individually (I believe the nvidia binary drivers can already do this on their own, but I may be wrong).
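
To make that concrete, here's a rough, untested sketch of the per-card X setup. The driver name, BusID values, config paths and user names are all placeholders you'd swap for your own (lspci gives the bus addresses):

Code:
# /etc/X11/xorg.conf.card0 - one config per card, pinned to it by PCI BusID
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"       # or "radeon"/"fglrx" for an ATI card
    BusID      "PCI:1:0:0"    # placeholder - take this from lspci
EndSection

Section "Screen"
    Identifier "Screen0"
    Device     "Card0"
EndSection

Then something along these lines to bring up one X server and one XBMC instance per card, each under its own user (plus whatever xauth/xhost fiddling is needed so those users may open the displays):

Code:
X :0 vt7 -config /etc/X11/xorg.conf.card0 &
X :1 vt8 -config /etc/X11/xorg.conf.card1 &
su - xbmc1 -c "DISPLAY=:0 xbmc" &
su - xbmc2 -c "DISPLAY=:1 xbmc" &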

The holy grail from there is to use regular multi-monitor tech (Xinerama etc.) on each graphics card and run 2 instances of XBMC per card (using sudo or something to run the second as yet another user), which effectively doubles the number of XBMC instances you're running. Got 4 video cards? That's 8 copies of XBMC in 1 box.
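
One wrinkle: Xinerama itself glues the heads into a single desktop, so for two independent full-screen instances you'd more likely leave Xinerama off and define the card's two outputs as separate X screens (Screen0/Screen1 in its xorg.conf). Assuming that layout, the doubling-up could look roughly like this (user names again placeholders):

Code:
# first card, two screens, two users
su - xbmc1a -c "DISPLAY=:0.0 xbmc" &
sudo -u xbmc1b sh -c 'DISPLAY=:0.1 xbmc' &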

On the sound front (don't forget this!), each HDMI output is a unique sound card in Linux, so if each card has HDMI out, that's 4 lots of sound accounted for automatically. Then, if your mobo happens to have 8-channel audio (most do now), you could play with ALSA to present it as 4 stereo outputs (if you go the Xinerama dual-display-per-card path). Otherwise you'll have to stack sound cards as well (USB 5.1 cards would do).
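
For the onboard 8-channel codec, something along these lines in /etc/asound.conf is one way to carve it up - purely a sketch: the card/device index, the sample rate and which slave channels map to which physical jacks are assumptions you'd verify with speaker-test:

Code:
# mixable access to the 8-channel device
pcm.onboard8 {
    type dmix
    ipc_key 8462
    slave {
        pcm "hw:0,0"       # assumed card/device index
        channels 8
        rate 48000
    }
}

# stereo pair on slave channels 0/1
pcm.pair_front {
    type plug
    slave {
        pcm "onboard8"
        channels 8
    }
    ttable.0.0 1
    ttable.1.1 1
}

# stereo pair on slave channels 2/3 - repeat the pattern for 4/5 and 6/7
pcm.pair_rear {
    type plug
    slave {
        pcm "onboard8"
        channels 8
    }
    ttable.0.2 1
    ttable.1.3 1
}

Each XBMC instance would then point its audio output at one of the pair_* devices.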

On top of that, you only have to deal with the overhead of a single OS (rather than one per VM), and if you run your library (SQL) locally, you remove the virtual network bottleneck.
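
On the shared-library point, XBMC reads its MySQL connection details from advancedsettings.xml in each profile's userdata folder, so every instance on the box could share one local database with something like this (host and credentials are placeholders):

Code:
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>127.0.0.1</host>
    <port>3306</port>
    <user>xbmc</user>
    <pass>xbmc</pass>
  </videodatabase>
  <musicdatabase>
    <type>mysql</type>
    <host>127.0.0.1</host>
    <port>3306</port>
    <user>xbmc</user>
    <pass>xbmc</pass>
  </musicdatabase>
</advancedsettings>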

Now that I think about it, I may just have to test this myself... I could probably throw together a small box with 2 low-end cards (unfortunately I don't have HDMI ones lying around).
Media/Gaming PC: i5-3550, 8GB DDR3, Gigabyte GTX 580 SOC, 120GB Sandisk SSD, 2TB Hitachi, Silverstone LC20B, Windows 8 with XBMCLauncher + Steam integration
Desktop: Intel [email protected], 16GB DDR3, Gigabyte GTX670 SOC running Windows 8
Server/Nas: Xeon 1230v2, virtualised Ubuntu 12.04 with Sickbeard + TransmissionBT (headless) + Sabnzbd
#47
I thought about multiple cards in a traditional box as an option a while ago. I was a bit afraid that I wouldn't be able to figure out the customization needed to split things up across different X sessions, and also that about the time I had it all working, an update to Ubuntu or XBMC would just break it.

Anyway, I am on the VM route now, mostly because I wanted to run Nas4Free for storage that could stay up all the time without rebooting, and without being susceptible to me inadvertently breaking the offsite backup through my constant need to tweak something totally unrelated.

So far I have the following VMs:
Nas4Free
MythTV Backend
MythTV Frontend - soon to also have XBMC installed once I figure out the MythTV integration. I plan to abandon the Myth frontend if XBMC works out.
A Win7 VM for testing things I don't want on my desktop
Ubuntu Server for the webserver and whatever else needs to be on Ubuntu but shouldn't run the risk of breaking the living room TV.

So there ends up being some duplication, but most of those VMs only need 1-2 GB of RAM anyhow.

As far as the passthrough goes, I have the video/sound working. I did notice Myth errored out from rebuffering too many times; I haven't investigated that yet - it could be something unrelated to video, or maybe it isn't. I just wanted to post to say that this approach certainly has hope. My biggest concern right now is passing a keyboard through - I never thought to research that first; I assumed the video was the hard part. I am going to try a Bluetooth keyboard, and if that doesn't work, I'll get a USB card and pass that through.
#48
(2012-09-12, 03:07)Stewge Wrote: If your goal is to get multiple XBMC instances outputting over multiple graphics cards to unique displays, you can do this with regular ol' Linux. [...]

I think you have the right idea here.
#49
Interesting stuff. I've got piles of hardware around for testing/playing, so I think I'll join the fray and give it a shot too. Maybe together we can get something working. Although Stewge has a lot of very valid points, I'd like to see something running on VMware if we can make it happen.
#50
(2012-09-13, 18:50)dunnsept Wrote: interesting stuff. I've got piles of hardware around for testing/playing.. I think I'll join the fray and give it a shot too. Maybe together we can get something working. Although Stewge has a lot of very valid points, I'd like to see something running on vmware if we can make it happen

Great news! Glad to hear.
#51
(2012-09-06, 19:44)teaguecl Wrote: Glad you are making progress on this - it's an interesting problem. We use ESX at work to run thin clients for about 300 engineers, and it sort of works. The engineers hate the VMs for several reasons:
- They are unstable. Sometimes the server decides to close some of the client sessions due to resource constraints. One user hogging lots of resources impacts everyone.
- They are slow. Everyone wants their laptops back.
- They don't deal with hardware well. The thin clients have USB ports, but plugging a USB device into a VM is a crapshoot. Most USB HDs work, as do thumb drives etc. Our engineers are constantly plugging in strange development devices which require special drivers - it's a nightmare.

If I had to gamble, my money is on you owning 4 ATVs and an unRAID server within the next six months :)

Your prediction is looking more and more correct every day!
#52
SUCCESS! Well, sort of.

My hardware is an Asus Sabertooth X58, an Nvidia GTX 570 and an ATI HD 5750. First I tried the Nvidia card, which Windows recognised just fine: it downloaded and installed the correct drivers, and the card even showed up in Device Manager as "Nvidia GTX 570", but the display settings just wouldn't detect the second display. I tried various hacks from the net, but after about 4 hours I gave up. Eventually I switched to the ATI card and got video out of it no problem (ESXi itself was using the 570 - what a waste).

The issue with the ATI card was sound. It worked, but it sounded a bit like Stephen Hawking and was delayed - often by up to 10 seconds. No good.

I've put this idea to bed for now until someone cleverer than I can succeed!
#53
I can't really comment as I have limited experience, but I think you should all be trying Xen instead of ESXi. I also think that, as a guest OS, you should try OpenELEC (it comes with all the needed drivers), since it has such a small footprint compared to resource-hungry Windows!

Edit: Maybe XBMCbuntu would be better than OE, as OE does not come as an ISO (I assume ESXi requires an ISO image).
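
If anyone does go the Xen route, the guest side of GPU passthrough is only a couple of lines in the domain config. A rough sketch (xl/xm style), assuming the card has already been bound to xen-pciback on the host - the name, disk path, bridge and PCI addresses are all placeholders:

Code:
# /etc/xen/xbmc1.cfg
name    = "xbmc1"
builder = "hvm"
memory  = 2048
vcpus   = 2
disk    = [ 'phy:/dev/vg0/xbmc1,hda,w' ]
vif     = [ 'bridge=xenbr0' ]
# the GPU plus its HDMI audio function
pci     = [ '0000:01:00.0', '0000:01:00.1' ]

XBMCbuntu (or whatever guest you pick) would then be installed inside that domain from its ISO.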
Guide to building an all in one Ubuntu Server - TV(vdr),File,Music,Web

Server Fractal Designs Define XL, Asus P5QL/EPU, Dual Core E5200, 4gb, L4M-Twin S2 v6.2, Supermicro AOC-USAS-L8I, 1*SSD & 13*HDD drives (24TB total) - Ubuntu Server
XBMC 1 ASRock Z77E-ITX, G850, 8GB RAM, SSD, BD - Ubuntu / OpenElec frodo
XBMC 2 Revo 3700 - OpenElec frodo
XBMC 3 Raspb Pi
#54
A lot of this for VMware will depend on how it's set up. There's VMware vCenter (or some such name - they change it once in a while), which is more for datacenter/server consolidation etc., and then VMware View, which is for providing desktops. The issue with VMware View is that it no longer supports Linux as a guest OS.
Xen has a lot of advantages over View for desktops (like PXE boot and numerous others, but that's a discussion for another forum ;-)

For the View setup I run, using PCoIP my users can run any of the Adobe products (Photoshop & Premiere), CAD etc. and they run just fine. RDP, not so much.
I'm bossless next week, so I might just have to do some "experiments" while at work, since I have some serious hardware available here.
#55
(2012-09-14, 16:15)dunnsept Wrote: A lot of this for VMware will depend on how it's set up. There's VMware vCenter, which is more for datacenter/server consolidation, and then VMware View, which is for providing desktops. [...]

These guys are talking about virtualizing XBMC instances, and using ESX's experimental capacity to "expose" PCI cards in the host to the virtuals. In other words, they want to have N virtuals, each attached to one of N dedicated video cards in the ESX host.

You'll never get PCoIP (or RDP, or ICA, or whatever) to perform anywhere close to what you'd need for decent video playback in XBMC.
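
For reference, once a device is marked for passthrough on the host and added to a VM, it ends up as pciPassthru entries in the VM's .vmx, roughly like the lines below. These are placeholder values (the vSphere client fills in the real IDs when you add the device), and the pciHole.start line is just a tweak that's often suggested for video cards with large memory apertures - no guarantee it's needed:

Code:
pciPassthru0.present  = "TRUE"
pciPassthru0.id       = "01:00.0"   # host PCI address (placeholder)
pciPassthru0.deviceId = "0x0000"    # placeholder - set when the device is added
pciPassthru0.vendorId = "0x0000"    # placeholder - set when the device is added
pciHole.start         = "2048"      # commonly suggested workaround for GPUs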

#56
I know. I have run XBMC with VMware View. It worked, but it wasn't stellar. I'm just trying to put out some more information about the differences between VMware and Xen. With PCoIP I can run HD videos without any stuttering; it's just that it takes basically all the available bandwidth.
I also use zero clients and VMware for remote digital signage. Obviously I don't need 30 fps for that, but it's an option.
#57
Love this thread!!

I have a Proxmox/KVM box that I will test on.
For anyone who hasn't heard of it, it's basically an open-source bare-metal hypervisor like vSphere/ESXi, but fully based on Linux and KVM.
I switched from vSphere a while ago and am loving it!! :D

Should be interesting to see whether it's easier to work with for hardware passthrough.
I'm just waiting to grab a cheap HD 6570 and/or GT 430 from eBay or somewhere, as I don't have any spare cards.
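
For anyone else on Proxmox, the passthrough side boils down to enabling the IOMMU on the host and adding a hostpci line to the VM's config - again just a sketch; the VMID, PCI addresses and whether the board's IOMMU cooperates are all assumptions:

Code:
# host: add intel_iommu=on (or amd_iommu=on) to the kernel command line, then reboot
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# /etc/pve/qemu-server/101.conf   (101 = VMID, placeholder)
hostpci0: 01:00.0
hostpci1: 01:00.1   # the card's HDMI audio function, if you want sound over HDMI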
#58
I had to give up my i7 PC and don't have anything to test on at the moment. Glad others are finding an interest in this. I'll find something else and begin work on this again soon.
#59
Damn, I wish I had seen this post a month ago... I'm trying to do a similar setup to the OP. I have a server with 2 quad-core Xeon W5580 CPUs, 32 GB of RAM and 3 graphics cards installed: one is a crappy old PCI card simply to display the ESXi console, and the other two are Nvidia Quadro FX 4800s, though I also have access to an ATI Radeon 5770. I live in an apartment, so I don't have HDMI already running in my walls; I have gigabit ethernet and was hoping that would be the only ugly wire I'd have to run to my rooms, and I was also under the impression that HDMI would only go 10 or so feet before the signal crapped out. I also need VMs to run OS X and Linux, so I figured I'd dedicate some RAM, processing power and the video cards to 2 HTPC instances. My goal is to run Windows 7, because I have an HDHomeRun Prime that needs to use Windows Media Center, and within WMC I have shortcuts set up to launch both XBMC and Plex. Anyway, the 3 physical machines I have for each TV suck up a lot of damn power, all but one of them are big and ugly, and they can all be loud, so I was hoping to do like the OP and virtualize all of that. I haven't tried the HDMI output on the cards in passthrough; I just figured they would be useful for rendering the video and then the network would push the pixels to the client... but so far no luck in that department. I also installed Citrix VDI-in-a-Box to try out their HDX feature, which works wonders with Flash content, but that's it.

Come on guys, there has to be a solution using this setup. No, I don't want or need to use Linux - I need WMC for the copy-protected stuff, and I prefer having virtual machine instances because of the versatility of managing them, backing them up, copying them, migrating them, etc.

So, bump... let's see if we can get any new insight.
#60
I'm really interested in running XBMC in virtual machines.
I've found some projects doing this with VirtualBox. They seem to work, but I can't test them myself since I don't have the needed hardware.

http://www.gefoo.org/generalfoo/archlinux-xbmc/
http://openelec.tv/forum/64-installation...enelec-iso
http://saraev.ca/xe/

Can someone with the proper hardware test these versions and let us know how they work?
