2012-04-10, 18:29
Hi there, I was inspired by anarchintosh's aggregate repo project (which clearly never actually got started) and have begun writing a repository aggregator for XBMC, in Python. This means duplicating in Python the code XBMC uses to download an addon from a repository.
I've already written a lot of the Python classes needed for this, but I need to ask the XBMC team: how does the repository framework (introduced in Dharma) check out an uncompressed addon directory from a repository? How can any HTTP server be used to host an uncompressed repository when there is no way of remotely walking its directory trees?
Walking seems necessary for any repository that does not use the standardised file layout of compressed addons (i.e. for compressed addons the names of all the files in an addon's directory on the HTTP server - changelog, zip, icon, fanart, addon.xml - can be predicted and constructed).
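To show what I mean by "predicted and constructed", here's a rough sketch of how the file URLs for a *compressed* addon can be built up front, with no directory walking. The base URL, addon id and version here are made-up examples, and the exact filename conventions are my assumption about the Dharma-style layout, not confirmed behaviour:

```python
# Sketch: building the predictable URL set for a zipped addon in a
# Dharma-style repository. repo_base, addon_id and version are all
# hypothetical example values.
def addon_file_urls(repo_base, addon_id, version):
    """Return the file URLs that can be constructed for a compressed addon."""
    folder = "%s/%s" % (repo_base.rstrip("/"), addon_id)
    return {
        "zip":       "%s/%s-%s.zip" % (folder, addon_id, version),
        "addon.xml": "%s/addon.xml" % folder,
        "icon":      "%s/icon.png" % folder,
        "fanart":    "%s/fanart.jpg" % folder,
        "changelog": "%s/changelog-%s.txt" % (folder, version),
    }

urls = addon_file_urls("http://example.com/repo", "plugin.video.demo", "1.0.2")
for name in sorted(urls):
    print(name, "->", urls[name])
```

For an uncompressed addon there's no equivalent trick I can see, because the addon's own source files and subdirectories have arbitrary names - which is exactly what my question is about.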
It seems like pure magic that XBMC can download an uncompressed addon... how does it discover the names of the various Python files, subdirectories, etc.?
Is there possibly something funny going on with a curl/wget-like function?