2013-11-23, 11:34
I'm also a member of the diyAudio forum but haven't been involved in the threads that stef_tsf has mentioned here. I am, though, enthusiastic about audio DSP and have been using xbmc (more recently openelec) for years on my main system, and I see no reason that would change in the foreseeable future. Shoehorning all of the audio playback DSP into an HTPC application leaves you with only one source for AV content ... I'd rather the xbmc devs stick to their current plans; I've looked, and they seem to cover everything any sane person would want.
Having said that, I have seen that there are some challenges in integrating an audio DSP device with an HTPC. Here are my thoughts; when I have time I'll be looking at these myself.
- jsonrpc API: allow announcing audio stream details (bitrate and bit depth)
- jsonrpc API: allow access to the audio stream delay setting
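To make the proposal concrete, here is a rough sketch in Python of what the two calls might look like on the wire. The method names (`Player.OnAudioStreamDetails`, `Player.SetAudioDelay`) are entirely hypothetical; neither exists in the xbmc JSON-RPC API as of this writing, which is exactly the gap being proposed.

```python
import json

def make_jsonrpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request payload, the format xbmc's API uses."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# Hypothetical notification announcing stream details to a listening client.
announce = make_jsonrpc_request(
    "Player.OnAudioStreamDetails",          # hypothetical method name
    {"samplerate": 48000, "bitdepth": 24},
)

# Hypothetical call setting the audio/video delay (in seconds).
set_delay = make_jsonrpc_request(
    "Player.SetAudioDelay",                 # hypothetical method name
    {"offset": 0.085},
)

print(announce)
print(set_delay)
```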
I checked whether this was possible earlier this year, but I knew there was an overhaul of that section of the code underway, so I left it alone until things had calmed down in that area.
This would help in cases where the audio processing delay is long enough to affect AV sync, and where that delay differs for audio data at different rates (this happens fairly often; I'll leave the boring details out).
With that in place, all video content delay adjustments could be managed via jsonrpc from a network-connected audio DSP device ... with minimal code changes (and maintenance) needed on the xbmc end.
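As an illustration of the rate-dependent delay problem, the DSP device could keep a calibration table of its processing latency per sample rate and push the matching delay over JSON-RPC whenever the stream rate changes. A minimal sketch, assuming a hypothetical `Player.SetAudioDelay` method and made-up latency figures:

```python
import json

# Hypothetical per-rate processing latency of the DSP box, in seconds.
# A resampling/filtering chain often has a different group delay at
# different input rates, which is the AV-sync problem described above.
DSP_LATENCY = {
    44100: 0.092,
    48000: 0.085,
    96000: 0.043,
}

def delay_request(samplerate):
    """Build a (hypothetical) JSON-RPC call telling xbmc how much to
    delay video to compensate for the DSP's latency at this rate."""
    offset = DSP_LATENCY.get(samplerate)
    if offset is None:
        raise ValueError("no latency calibration for %d Hz" % samplerate)
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "Player.SetAudioDelay",   # hypothetical method name
        "params": {"offset": offset},
        "id": 1,
    })

print(delay_request(48000))
```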
I imagine that starting playback with a device like this would require a setup procedure similar to the way xbmc currently switches the display to the native refresh rate when video playback initialises, except that the communication would be via jsonrpc rather than with display devices.
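The startup handshake might then run along these lines. This is a sketch only: `FakeDsp`, its `configure` method, and the latency values are all invented for illustration, standing in for whatever protocol the real network DSP device would speak.

```python
class FakeDsp:
    """Stands in for the network DSP box: acknowledges the stream
    format and reports its processing latency for that format."""
    LATENCY = {44100: 0.092, 48000: 0.085}

    def configure(self, samplerate, bitdepth):
        return {"ready": True, "latency": self.LATENCY[samplerate]}

def start_playback(dsp, samplerate, bitdepth):
    """Mirror of xbmc's refresh-rate switch at playback start:
    pause briefly, configure the downstream device, apply its
    reported latency as the AV delay, then let playback roll."""
    reply = dsp.configure(samplerate, bitdepth)
    if not reply["ready"]:
        raise RuntimeError("DSP refused stream format")
    av_delay = reply["latency"]   # would be applied via jsonrpc
    return av_delay

delay = start_playback(FakeDsp(), 48000, 24)
print("applying AV delay of %.3f s" % delay)
```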