OpenELEC - slow library scanning
#1
I am happily running OpenELEC and it works fairly well. There are two things that confuse me:

1) Incredibly slow scanning of my movie library. I have added 8 HDDs with movies to my XBMC library, all of which are located on a Windows-based server. My OpenELEC Raspberry Pi accesses the server via SMB shares. It scans, but it takes around 10 hours for 1,000 movies. This is very weird, as I have selected "NFO only". It takes less than 5 minutes on my full-fledged HTPC (same network, also over SMB).

2) My total library includes a few thousand movies, which makes browsing the library quite slow. Is there a way to have the Raspberry Pi pre-cache all movie posters? I have disabled fanart to make things faster.


Thanks!!!
Server: Asus Sabertooth Z77 | Intel Core i5 3.4 GHz | 16 GB DDR3 | 128 GB SSD, 82 TB (9 x 6 TB, 7 x 4 TB)
HTPC 1: Raspberry Pi 2 | HTPC 2: Raspberry Pi 2 | HTPC 3: Raspberry Pi
#2
If the Pi is your scraping client, the movie posters should already be pre-cached, if I'm not mistaken?
Nevertheless, search for the texturecache script from Milhouse, which caches all your thumbs if they aren't in the cache already.
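Something along these lines should work on the Pi itself (a rough sketch; double-check the download URL against the GitHub page):

Code:
# fetch the script, make it executable, then cache any missing artwork
curl -O https://raw.githubusercontent.com/MilhouseVH/texturecache.py/master/texturecache.py
chmod +x texturecache.py
./texturecache.py c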
#3
(2014-08-08, 14:06)theowiesengrund Wrote: If the Pi is your scraping client, the movie posters should already be pre-cached, if I'm not mistaken?
Nevertheless, search for the texturecache script from Milhouse, which caches all your thumbs if they aren't in the cache already.

The Pi is scanning the library, which has NFO files and posters stored on an SMB share on a server. I don't have the feeling that it pre-caches. I have seen the script, but the long thread indicates it is not fully bug-free. Ideally, you are right that it already pre-caches anyway. Is there any setting I need to enable for this?
#4
The long thread indicates that it's full of new features and under active development ;-) It's really not that hard to use, but perhaps Milhouse will also jump into this thread ;-)

There shouldn't be any setting that you have to enable for caching. Are you using a MySQL library, or does each client have its own library (aka a VideosXX.db file)? In the latter case, images should already be cached.

You could do a test run with the nc option (./texturecache.py nc) and the script will identify those items that require caching. The "c" option would then actually cache all the missing artwork that was identified in the "nc" run.
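For example (from memory, so treat it as a sketch; the "movies" argument just limits the run to the movie library):

Code:
./texturecache.py nc movies   # dry run: list items whose artwork is not yet cached
./texturecache.py c movies    # actually cache the artwork identified above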
#5
Sounds good; I should probably give it a try. Have you installed it directly on the OpenELEC RPi? It takes quite a while until a poster shows up, so I am quite sure this must be related to caching. I am not using MySQL; every client has its own library. However, it is not scraped from the internet but from local NFOs and local posters over an SMB share.
#6
That is exactly my old setup (well, I used NFS instead of SMB), also with local NFOs and local posters and with the library on the Pi.
Posters were all already cached and response time was OK. Are you using an SD-card-only installation or SD card + USB drive? That could also make a difference in speed, I guess.
You can download the texturecache script to the Pi and run it there.
The installation instructions for OpenELEC are on the texturecache GitHub page. Just give it a try.
#7
Btw, what about my question #1, which is the bigger concern? I have 20 HDDs on a server and even a rescan takes hours (even if nothing was added). And a full scan of all movies literally takes days. All posters and NFOs are stored locally, so there is no need to scrape anything from the internet. Thanks!
#8
This is an ARM system with very limited processing power. It will never scan, especially such a large library, as fast as a laptop/desktop that costs 10x more and likely has an even larger multiplier where processing power is concerned. Your best option: use MySQL to share the library across all your HTPCs, including the Pi, and do all the library scanning with one of the more powerful machines.
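Roughly, you'd put an advancedsettings.xml like this on each client (the host/user/pass values are placeholders for your own MySQL server; on OpenELEC the userdata folder lives under /storage/.xbmc):

Code:
cat > /storage/.xbmc/userdata/advancedsettings.xml <<'EOF'
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>xbmc</user>
    <pass>xbmc</pass>
  </videodatabase>
</advancedsettings>
EOF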
#9
Even an ARM system shouldn't take that long (days).

I can scan 665 movies with local artwork and NFOs over NFS into a MySQL db in under 1 hour using the Universal Movie Scraper on a Pi. TV shows (61 shows, 2,991 episodes) also take about 1 hour. Scanning a new episode takes seconds. This is with OpenELEC.

I'd look at the storage performance; something might be wrong with SMB. Or you're downloading cast thumbs, which can result in tens of thousands of downloads if you're not careful. Or your local artwork isn't being found and is instead being downloaded (look at your media library using the previously mentioned script to confirm the artwork locations being used).

Upload a debug log (wiki) of a scan if you can't work out where the bottleneck is.
Texture Cache Maintenance Utility: Preload your texture cache for optimal UI performance. Remotely manage media libraries. Purge unused artwork to free up space. Find missing media. Configurable QA check to highlight metadata issues. Aid in diagnosis of library and cache related problems.
#10
Thanks for your message. That was also my thinking. I don't expect miracles and I am already hugely impressed by the performance of this low-power unit. It took me ages to buy one as I really dislike low-performance machines, but I now regret that I didn't pull the trigger earlier. The RPi is amazing!!!

The one and only remaining issue is the library scan. I don't mind hours, but days is just too long.

I am not using MySQL, but the "normal" local library. Actually, there should not be any scraping happening, as I set the content to "local NFO only". I also have "actor thumbs" disabled, so I don't think this is the reason either. The NFOs do include actor links, though, and there is a separate thread noting that these take a long time to add (even if nothing is scraped).
#11
MySQL or SQLite (the local DBs) shouldn't make such a significant difference; I just included that for completeness.

Check the metadata that has been scraped by running the script; for example, "./texturecache.py jd movies avatar" will show you basic metadata for the movie Avatar. You should have artwork URLs that begin with "image://smb", confirming you are using local artwork and not artwork downloaded from the internet.
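A quick way to eyeball just the artwork URLs (assuming grep, which OpenELEC has):

Code:
./texturecache.py jd movies avatar | grep image://
# URLs beginning image://smb are local; image://http means downloaded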

What I would do now is create a new folder on your NAS and copy in half a dozen movies (with NFOs and artwork). Enable debugging on the Pi, add this new folder as a new source, let it finish scraping, then upload your debug log.
#12
Let me work on it over the weekend to get the logs. I am somewhat hesitant, though, as it has taken me 2 weeks to get 80% of my library scanned. If I now create a new folder and wipe the "old" library, all the hard "work" is wasted and it may take me weeks to rescan everything.

Some evidence that nothing is being scraped: my Textures db is a slim 5 MB and my video db a slim 50 MB. Given that this reflects 5,000 movies and plenty of TV shows, that does not sound like too much, does it?
#13
Rename your .xbmc folder to .xbmc.bak so that you can restore it later. The size of the database files is pretty meaningless; you need to query them to determine how much information they contain.
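Something like this (the MyVideosXX.db version number varies by XBMC release, and sqlite3 may not be present on a stock OpenELEC install, so adjust accordingly):

Code:
mv /storage/.xbmc /storage/.xbmc.bak   # park the old library so it can be restored
# later, query the old db to see what it really contains (db name will vary):
sqlite3 /storage/.xbmc.bak/userdata/Database/MyVideos78.db "SELECT COUNT(*) FROM movie;"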
#14
(2014-08-11, 08:31)Milhouse Wrote: Check the metadata that has been scraped by running the script; for example, "./texturecache.py jd movies avatar" will show you basic metadata for the movie Avatar. You should have artwork URLs that begin with "image://smb", confirming you are using local artwork and not artwork downloaded from the internet.

Unfortunately, I failed to run the texture script. I downloaded it successfully, but when executing it, I am getting "access denied". Thoughts?
#15
(2014-08-16, 03:26)steve1977 Wrote: Unfortunately, I failed to run the texture script. I downloaded it successfully, but when executing it, I am getting "access denied". Thoughts?

The error makes no sense. Make sure you've set the execute permission on the file (without it, you'll get "permission denied"). If that doesn't fix it, I'll need to see _exactly_ what you are typing and the error you are getting - either upload a screenshot or paste the contents of your SSH session here.
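i.e. something like this (using the harmless "version" option just as a test that it runs):

Code:
chmod +x ./texturecache.py
./texturecache.py version
# or sidestep the execute bit entirely by invoking the interpreter:
python ./texturecache.py version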

If you're using one of my OpenELEC builds, texturecache.py is built in (it's in /usr/bin/texturecache.py).
