2011-08-21, 01:17
I'm working on a feature for XBMC that, initially, will require audio from a line in or microphone to be sent to the visualizer. What's the best way to accomplish this?
My thought was to create a new protocol, linein://. Then, when XBMC is requested to play linein://, it loads a LineInCodec. The LineInCodec simply reads in the waveform and echoes it to paplayer (and, from what I understand, paplayer already hands the audio and FFT data to the visualizer using OnAudioData()).
linein:// wouldn't have any directories, files or options after the ://. Therefore, I can avoid creating a CLineInDirectory. The OS layer would need to be abstracted, and I am unsure whether I should do this in LineInCodec or create a CFileLineIn. Either way, the actual line in device would be chosen in the settings and the OS abstraction would use this setting to choose the appropriate device.
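Either way the codec (or a CFileLineIn) ends up talking to the OS through an abstraction chosen at runtime. A rough sketch of what that layer might look like, with every name here (IAudioInput, AlsaInput, CreateAudioInput) being hypothetical rather than existing XBMC classes:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical abstraction over platform capture APIs. Each OS backend
// (ALSA, DirectSound, CoreAudio, ...) would implement this interface.
struct IAudioInput {
    virtual ~IAudioInput() = default;
    virtual bool Open(const std::string& device) = 0;
    virtual std::string Name() const = 0;
};

// Sketch of one backend; a real one would call snd_pcm_open() etc.
struct AlsaInput : IAudioInput {
    bool Open(const std::string& device) override { m_dev = device; return true; }
    std::string Name() const override { return "ALSA:" + m_dev; }
private:
    std::string m_dev;
};

// Factory: the codec asks for "the configured input" and never touches
// platform APIs directly. The device string would come from XBMC settings.
std::unique_ptr<IAudioInput> CreateAudioInput(const std::string& settingsDevice) {
    auto input = std::make_unique<AlsaInput>();  // would branch per-OS here
    input->Open(settingsDevice);
    return input;
}
```

With this split, whether the factory lives behind a CFileLineIn or inside LineInCodec becomes mostly a question of where the protocol handling code wants to sit, not of how the capture itself works.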
Is this an appropriate way to do this? Should I implement a CFileLineIn or offload microphone reading to the LineInCodec? Should the protocol be called linein://, audioin://, microphone://, device://XXX or something else?
I don't care about latency at all. However, I know there are a lot of people requesting microphone support for karaoke, so ideally I hope to do this feature in such a way that it can easily enable karaoke in the future.