Kodi Community Forum

Full Version: [BUG?] Is Scraper cache loading working as intended?
Hello,

today I tried to use my AniDB scraper again (after 3/4 of a year of hibernation) and got some really weird results in some cases. After some debugging I found that all my problems come from the values passed in $$1 (the content of the specified URL, or of the cache) to subfunctions. When the content of the specified URL is loaded directly (not cached) by CFileCurl::Get(), everything is fine, because the buffer is cleared internally. But when the content is loaded from the cache by CScraperUrl::Get(), it is simply appended to the end of the existing buffer. So after a few subsequent subfunction calls $$1 is filled with a lot of junk (the regexp-engine slowdown is quite noticeable), and unfortunately the most recent cache content sits at the very end, which results in wrong parses. I know my parameter buffers aren't cleared because I use clearbuffers="no" almost everywhere, but in my view a cache read should return only the requested content (as a direct web-site access does), not the whole "history".
Is this a known bug? I can't find it in Trac.

Regards
Bambi
Is what is in the cached files what you expect? Check the scraper cache and figure out exactly what's going wrong. Then post a ticket on trac and cc vdrfan, spiff + me.

Cheers,
Jonathan
The content of the cached files is correct; that was the first thing I checked :)
I posted ticket #11377.