Video addon performance on Pi
#16
Some time ago I read a bit about optimising the SQL queries. Some queries are written inefficiently, so optimising them would speed up browsing through the menus.
Reply
#17
There is some evidence that python regular expression matching is a cause of slow add-ons on the Pi. We may be able to do something about that, but first need some evidence.

First, what are the worst add-ons for speed on the Pi? I only use iPlayer which is quick enough.

Any examples where it takes 30 seconds or longer to browse through file lists?
Ideally these should be add-ons that are much faster on a PC (i.e. the slow part is definitely at the Pi end, rather than an overloaded server).
An add-on that takes ages to produce a small list is more interesting than one that takes ages to produce a massive list.

Describe what the add-on is, any special set up options, and exactly how you navigated and how long each step takes.

We can't discuss any piracy add-ons directly, so if you have information that seems relevant, then post some general information, and PM me the name of the add-on.
Examples of allowed add-ons (e.g. ones with thread on this forum) are obviously better.
Try to avoid geo-locked add-ons (unless they work from the UK).

I'd also be very interested to hear from any add-on authors who have used the Pi and have some ideas what the slow operations are.

At the moment, I just want to know about add-ons that work correctly but are slow. No reports about streams that don't start or otherwise unreliable add-ons.
Reply
#18
Unfortunately, most of the add-ons that are slow on the RPi are of the "can't be discussed here" variety. iPlayer is a very lean, fast add-on, so it doesn't really count - I too find it more than fast enough on the RPi.

One legitimate add-on that you could try, which is a bit slow but not terrible, is Apple iTunes Trailers. The initial load of the list of movies is rather slow (20+ seconds on the Pi versus about 2 seconds on a fast PC). Unfortunately it caches the list, so the next time it's quite quick. (Not sure where to delete the cache off hand.)

To be honest, Gotham seems to be a lot faster than Frodo on the RPi, including add-on performance, to the point where the slowness of add-ons is much less of a concern. It becomes difficult to judge whether any remaining slowness is genuinely just the CPU reaching its full potential or whether there is still a lot of room to optimise.
Kodi 18.3 - Mid 2007 Mac Mini, 4GB, 2TB HD, Windows 7 SP1
Kodi 18.3 - Vero4k, Raspberry Pi 2. OSMC.
Reply
#19
Another candidate you may look at is PBS. All or some of the content may be geolocked, but you should be able to browse the directories. Free Cable is also super slow on the Pi - it's not at addons.xbmc.org, but it is freely discussed on the board.

If there is a video addon that is not slow on the Pi, I have not seen it. I find iPlayer pretty slow, but it's not so bad because the hierarchy is not so deep - often what you want is just one click away.

Addons where your target is 4 or 5 clicks away, or addons where you need/want to browse around, are a negative user experience on the Pi even if the lag is well under 30 seconds.
Reply
#20
I have uploaded three files that should assist with testing regex vs. re, and obtaining profiling data.

timer_regex.py

This is a "proxy" package that will intercept calls to re.* methods (compile, findall, sub etc.) and log timings for both the re and (if available) regex calls. The data will be written to a file in /tmp named after the currently executing addon, eg. /tmp/plugin.video.iplayer.dat

If the regex package is not installed, then only the timings for re will be recorded.

Place this file in the root directory of the addon to be analysed, eg. /storage/.xbmc/addons/plugin.video.iplayer/timer_regex.py

importhook.py:

This is a hack to subvert the import process and ensure the addon (and any "helper" addons or scripts, but not system libraries) uses the timer_regex package whenever "re" is imported.

Place this file in the root directory of the addon to be analysed, eg. /storage/.xbmc/addons/plugin.video.iplayer/importhook.py

Each addon usually has a file called default.py, and the call to "import importhook" needs to be added to default.py as the first import, before any other import. This will "hook" calls to "import re" so that "timer_regex" is imported instead of "re".

Also add "importhook.unload()" as the last line in default.py - this allows the elapsed time for all processing to be recorded for analysis. This last line is optional and can be omitted if you are not interested in the elapsed time analysis. However, having this information available helps give some context/perspective to the other timings - eg. is a 5 second optimisation worthwhile when the overall elapsed time is 2 minutes? If you don't know the elapsed time, it's harder to say whether a potential optimisation is likely to be of any benefit.

The importhook method has been shown to work with iPlayer, SportsDevil, Daily Motion and YouTube addons.

Some addons, such as YouTube, use other addons and scripts to implement their functionality, eg. YouTube has a dependency on script.module.simplejson, script.module.parsedom and script.module.simple.downloader. importhook will automatically ensure these "helper" addons use timer_regex too whenever called by the addon being analysed (there is no need to modify the "helper" addons/scripts).

analysis.sh:

A simple analysis tool. See "analysis.sh -h" for help.

It will display the accumulated details for each regular expression event (ie. each time a method is executed), and will show the most frequent and most expensive regular expression methods used by the addon.

Example:
Code:
rpi512:~ # ./analysis.sh -i /tmp/plugin.video.iplayer.dat -a -f -e -m findall
Method       Freq  regex vs.   re  Avg +/- us  |             re (min/max/avg/total)             |            regex (min/max/avg/total)
c.compile     644     28 vs   616   -313.5941  |  62.9425 /  43292.9993 /   681.6713 / 0.4390s  |  99.8974 /  69763.1836 /   995.2654 / 0.6410s
c.findall     619     52 vs   567    -21.0925  |  53.8826 /   4201.8890 /    98.9822 / 0.0613s  |  85.8307 /   4757.1659 /   120.0748 / 0.0743s
c.match         2      1 vs     1    -15.1396  | 113.9641 /    254.8695 /   184.4168 / 0.0004s  | 136.1370 /    262.9757 /   199.5564 / 0.0004s
c.search      619     44 vs   575    -43.4284  |  45.7764 /   4821.0621 /    71.8811 / 0.0445s  |  56.0284 /   4237.8902 /   115.3095 / 0.0714s
d.findall    4945    316 vs  4629   -100.3489  | 117.0635 / 280357.1224 /   296.9627 / 1.4685s  | 216.0072 /  32974.0047 /   397.3115 / 1.9647s
d.sub          82     14 vs    68   -489.4821  | 133.9912 /   7480.8598 /   374.0787 / 0.0307s  | 171.8998 /  24275.0645 /   863.5608 / 0.0708s
===============================================================================================================================================
TOTAL        6911    455 vs  6456     -0.7783s |                                       2.0443s  |                                       2.8226s

ELAPSED TIME less re   :   25.6184s
ELAPSED TIME less regex:   24.8401s
ELAPSED TIME TOTAL     :   27.6627s
PERF LOGGING OVERHEAD  :    9.1602s (included in above elapsed times)

Methods prefixed with c. are methods called on compiled pattern objects. Methods prefixed with d. have been called directly with a pattern requiring compilation.
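The distinction can be reproduced with plain re - the first call below goes through a compiled pattern object (a c. event), while the second passes the pattern as a string on every call (a d. event):

```python
import re

# c. event: the method is called on an already compiled pattern object,
# so no pattern lookup or compilation happens at call time.
pat = re.compile(r'<title[^>]*>(.*?)</title>')
titles_c = pat.findall('<title>One</title><title>Two</title>')

# d. event: a module-level call that passes the pattern as a string, so
# re must find (or build) the compiled pattern on every single call.
titles_d = re.findall(r'<title[^>]*>(.*?)</title>',
                      '<title>One</title><title>Two</title>')
```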

Top 10 most frequent findall() calls (ranked by descending frequency):

    619 d.findall       <updated[^>]*>(.*?)</updated>
    619 d.findall       <title[^>]*>(.*?)</title>
    619 d.findall       <link rel="self" .*title=".*pisode *([0-9]+?)
    619 d.findall       <link rel="related" href=".*microsite.*title="(.*?)" />
    619 d.findall       <id[^>]*>(.*?)</id>
    619 d.findall       <content[^>]*>(.*?)</content>
    619 d.findall       <category[^>]*term="(.*?)"[^>]*>
    619 c.findall       PIPS:([0-9a-z]{8})
    530 d.findall       <link rel="self" .*title="([0-9]+?)\.
     80 d.findall       iplayer/categories/(.*?)/list

Top 10 most expensive findall() calls (ranked by regex time):

-       247383.1177 us  280357.1224 us  32974.0047 us   d.findall       16      <entry>(.*?)</entry>
-        4199.0280 us   32804.0123 us   28604.9843 us   d.findall       0       <\?xml version="[^"]*" encoding="([^"]*)"\?>
+       15471.9353 us   10276.0792 us   25748.0145 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+       11581.1825 us    9920.8355 us   21502.0180 us   d.findall       16      <link rel="self" .*title=".*pisode *([0-9]+?)
+       12005.8060 us    7300.1385 us   19305.9444 us   d.findall       16      <link rel="self" .*title="([0-9]+?)\.
+        8353.9486 us    4940.0330 us   13293.9816 us   d.findall       16      iplayer/categories/(.*?)/list
+        2883.1959 us   10088.9206 us   12972.1165 us   d.findall       16      <updated[^>]*>(.*?)</updated>
+         500.9174 us   12237.0720 us   12737.9894 us   d.findall       16      <category[^>]*term="(.*?)"[^>]*>
+        6442.7853 us    5768.0607 us   12210.8459 us   d.findall       16      <content[^>]*>(.*?)</content>
+         237.7033 us   10388.1359 us   10625.8392 us   d.findall       16      <title[^>]*>(.*?)</title>

Downloading:

Code:
wget www.nmacleod.com/public/regex/timer_regex.py
wget www.nmacleod.com/public/regex/importhook.py
wget www.nmacleod.com/public/regex/analysis.sh
chmod +x analysis.sh

then copy the timer_regex.py and importhook.py files into the root of the addon to be analysed.

Enable by top & tailing default.py with:
Code:
import importhook
...
importhook.unload()

Note: Depending on how the addon is written, you may need to indent the last line.

Disable by simply removing the two importhook references from default.py.

Understanding the results

One of the most common issues I've seen is repeated calls to re.compile() for a static pattern in a loop while iterating over data. For instance in iplayer, this can be seen when navigating to Categories -> Childrens, where re.compile() is called over 600 times when it could be called just once.

In fact, as a general rule, addons should compile all static patterns, and compile each of them only once.

What often happens, however, is that patterns are not compiled explicitly, which means the re/regex package has to look up its internal cache on every call to see whether the pattern has been compiled previously. This wastes time, since each of those calls pays a cache-lookup cost that a pre-compiled pattern object avoids entirely.

So rather than:
Code:
for string in lots_of_data:
  result = re.sub("some pattern", replacement, string)
the following code could be used, which avoids the compile cache overhead:
Code:
re_pattern = re.compile("some pattern")
for string in lots_of_data:
  result = re_pattern.sub(replacement, string)

And rather than repeatedly calling a function which then compiles a static pattern, compile the patterns once for the entire module:

So instead of:
Code:
def series_match(name):
    # match the series name part of a programme name
    seriesmatch = []

    seriesmatch.append(re.compile(r'^(Late\s+Kick\s+Off\s+)'))
    seriesmatch.append(re.compile(r'^(Inside\s+Out\s+)'))
    seriesmatch.append(re.compile(r'^(.*?):'))
    match = None

    for s in seriesmatch:
        match = s.match(name)
        if match:
            break
    return match

where series_match() is called 100 times resulting in 300 re.compile() calls, use the following recipe:

Code:
re_series_match = [re.compile(r'^(Late\s+Kick\s+Off\s+)'),
                   re.compile(r'^(Inside\s+Out\s+)'),
                   re.compile(r'^(.*?):')]

def series_match(name):
    # match the series name part of a programme name
    match = None

    for s in re_series_match:
        match = s.match(name)
        if match:
            break
    return match
and now there are only three calls to re.compile() no matter how many times series_match() is called.

It's unlikely that the above small changes will have a huge performance impact but they should be beneficial over the long run.

The profiling data may also reveal other behavioural aspects worthy of attention and improvement. Not just that, the data may also demonstrate that regex does not, as a rule, outperform the existing re package (either at all, or by a significant margin).

On the basis of this analysis I have submitted patches for iplayer and SportsDevil that attempt to eliminate the worst cases of repeated static pattern compilation.
Texture Cache Maintenance Utility: Preload your texture cache for optimal UI performance. Remotely manage media libraries. Purge unused artwork to free up space. Find missing media. Configurable QA check to highlight metadata issues. Aid in diagnosis of library and cache related problems.
Reply
#21
Here are some results for you (don't know how to avoid soft wrapping results - sorry):
Code:
newpi:~/downloads # ./analysis.sh -i /tmp/plugin.video.free.cable-beta.dat
Method         Freq   regex vs.   re   Avg +/- us  |                re (min/max/avg/total)              |               regex (min/max/avg/total)
compile         465      67 vs   398    -231.5408  |     61.0352 / 270032.8827 /  9334.2027 /  4.3404s  |     98.9437 / 327448.8449 /  9565.7436 /  4.4481s
findall (c)     129      57 vs    72    -710.9143  |     79.1550 /  25077.1046 /  4898.4760 /  0.6319s  |    107.0499 /  38685.7986 /  5609.3903 /  0.7236s
match (c)     16929    1928 vs 15001     -65.0333  |     41.9617 /  10202.8847 /   121.7473 /  2.0611s  |     46.0148 /  17414.0930 /   186.7806 /  3.1620s
search (c)     7248     313 vs  6935     -47.9230  |     25.0340 /  13221.0255 /    90.7388 /  0.6577s  |     43.8690 /  10245.8000 /   138.6618 /  1.0050s
split (c)       461     130 vs   331    -101.6615  |     56.9820 /   5759.0008 /   148.3455 /  0.0684s  |     63.8962 /   6919.1456 /   250.0070 /  0.1153s
sub            3304     503 vs  2801    -387.4857  |    180.9597 /  14863.0142 /   655.7624 /  2.1666s  |    243.9022 /  24329.1855 /  1043.2481 /  3.4469s
sub (c)         129     122 vs     7     314.9362  |     68.9030 /   3906.0116 /   485.8416 /  0.0627s  |     61.9888 /   2467.1555 /   170.9055 /  0.0220s
===========================================================================================================================================================
TOTAL         28665    3120 vs 25545      -2.9342s |                                           9.9887s  |                                          12.9229s

No PYEXIT events available - add "importhook.unload()" to the end of default.py in order to collect these events

Methods ending in (c) are compiled methods (good) - without (c), the method (findall, sub etc.) has been called with a pattern requiring compilation
Reply
#22
Wrapping - I had to edit out a few blank columns to make it fit nicely :)

Probably best to upload the output to pastebin, eg. "./analysis.sh -i /tmp/plugin.video.free.cable-beta.dat | pastebinit" and then post the url in future.

Could you also re-download timer_regex.py and analysis.sh as (about half an hour ago) I tweaked the way I distinguish between compiled methods and directly called methods that then require compilation (ideally, the latter should be avoided in favour of pre-compiled patterns).

As for your results, it looks like a chunk of time (4.3-4.4s) is spent repeatedly compiling regular expressions, and another chunk (2.1-3.4s) is spent performing direct calls to re.sub() passing in a pattern (most likely static) that then needs to be compiled. Two quick optimisations (without seeing the code) would be to compile each of those patterns only once, and to replace the direct re.sub() calls with calls on the pre-compiled pattern objects, re-using the compiled objects many times.

The addon spends quite a while matching patterns - fortunately it looks like the addon has compiled the pattern being matched once, then matched the compiled pattern many thousands of times. It's unlikely there's much of an optimisation here, other than not needing to do the match at all (for instance, if you know the string isn't likely to be present because of some other status or flag, don't perform the more expensive re.match()).
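As a sketch of that last point (the pattern and helper below are invented for illustration), a cheap substring test can often rule out the expensive match before it is attempted:

```python
import re

# Hypothetical example: skip the regular expression entirely when a cheap
# substring check shows the line cannot possibly match.
link_pat = re.compile(r'<link rel="related" href="(.*?)"')

def related_link(line):
    if '<link' not in line:   # cheap guard - most lines have no <link> at all
        return None
    m = link_pat.search(line)
    return m.group(1) if m else None
```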

It's pretty likely that 5 seconds could be saved by optimising the re.compile()/re.sub() calls.

It's hard to say if this is a worthwhile improvement without knowing the total time spent in the addon, which is the purpose of the importhook.unload() call - if you're spending 20 seconds in the addon to rack up 10 seconds of regular expression calls, of which 5 seconds worth could be eliminated, then it's a worthwhile optimisation. Less so if you're spending 2 minutes in the addon to rack up 10 seconds of RE calls to then eliminate only 5 seconds.

If you run "./analysis.sh -i /tmp/plugin.video.free.cable-beta.dat -f" or "./analysis.sh -i /tmp/plugin.video.free.cable-beta.dat -e" you should see the most frequent and expensive compile() calls (ranked on regex time, use -r to rank on re time).

Add "-m <method>" eg. "-m sub" to see the most frequent/expensive re.sub() calls.

Overall, regex seems to be outperformed by re.
Reply
#23
Will do.

On a tangent - while testing this add-on, it occurred to me that a scrolling description of the selected channel continues to run during the interminable wait for the next menu to be displayed. According to top, the CPU is already running at 70% before clicking to open the next menu in the hierarchy, mostly from the pretty scrolling and fadeout. That's going to slow down navigation a bunch.
Reply
#24
Also to note the top answer on

https://stackoverflow.com/questions/4521...re-compile

suggests regular expressions are compiled and cached - although I'd have to check the re module code to see whether this covers .compile as well as .match etc. That said, it doesn't seem to match up with the profiling results (if I understood them correctly).

I wonder how much difference this will make in the real world, when most of the time is spent doing an HTTP GET, caching data to disk, and so on.

I'm not arguing against optimisation of course, just wondering what the real benefit will be - I'd be surprised if we can speed the addon up by 4-5 seconds (I never noticed it being that slow on my Pi either). That said, I'll be happy to implement at least some of your patch over at the iplayer ticket you made.

Cheers.
Reply
#25
(2014-03-27, 20:25)exobuzz Wrote: I wonder how much difference this will make in the real world, when most of the time is spent doing an HTTP GET, caching data to disk, and so on.
I don't see how that would be the bottleneck (absent the effect of processor-related delay). Free Cable is unusable on a Pi: click on a network (as in TV network, like PBS) and wait over a minute for the next menu, listing shows, to open. This takes a couple of seconds on a desktop or notebook.

It seems more likely that the delay is due to super slow parsing, aggravated by most of the CPU cycles being wasted on the GUI while the addon is struggling to do its job.
Reply
#26
I was more specifically talking about iPlayer - haven't used those addons but that sounds pretty horrible. Sounds like Python performance on the Pi is a lot worse than the original Xbox (which can be a little sluggish for some plugins).

You are probably right regarding the parsing and lack of free CPU to run the python code etc. I would admit I've not used many plugins on my Pi though.
Reply
#27
(2014-03-27, 20:25)exobuzz Wrote: Also to note the top answer on

https://stackoverflow.com/questions/4521...re-compile

suggests regular expressions are compiled and cached - although i'd have to check the re module code to see if this handles cases of using .compile as well as .match etc. Although this doesn't match up with the profiling results (If I understood them correctly).

Yes, re (and regex) both have a pattern cache; however, looking up the cache still has a cost, and sometimes that cost can be quite high (although obviously this could be because the Pi is busy doing something else and the addon is being starved of CPU time).

Then again, it's not just a simple hashed pattern lookup (not in v2.7.3 anyway; maybe it is/was in v2.5) - it's a bit more complex than that, as the cache key also has to take the flags into consideration, and the pattern cache processing in the regex package is a lot more complex than it is in re. All of this may account for the higher than expected time required to find and return a previously compiled pattern object.
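The flag handling is easy to observe with plain re: the cache is keyed on the pattern text together with its flags, so the same text compiled under different flags occupies separate cache entries (this reflects CPython's re module; regex differs in the details):

```python
import re

p1 = re.compile('hello')
p2 = re.compile('hello')                 # cache hit: the same object is returned
p3 = re.compile('hello', re.IGNORECASE)  # different flags: a separate cache entry

assert p1 is p2
assert p1 is not p3
```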

Here's one example demonstrating the difference the cache can make:
Code:
+       16117.0959 us    8687.9730 us   24805.0690 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+          96.7979 us     142.0975 us     238.8954 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+         137.8059 us     172.1382 us     309.9442 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+          97.0364 us     123.9777 us     221.0140 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+         103.9505 us     133.0376 us     236.9881 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+         115.8714 us     141.1438 us     257.0152 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+          10.0136 us     267.0288 us     277.0424 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+         119.2093 us     136.8523 us     256.0616 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+         117.0635 us     144.0048 us     261.0683 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />
+         108.0036 us     147.1043 us     255.1079 us   d.findall       16      <link rel="related" href=".*microsite.*title="(.*?)" />

(Note: The above timings were recorded on a 1GHz Raspberry Pi - the same events on a stock 700MHz Pi would take longer)

In the above timings, the first time the pattern is seen it takes 8687us for the re package to compile the pattern, and a fairly hefty 24805us for the regex package to do the same thing.

On subsequent calls to re[gex].compile() for the same pattern, the internal caches are utilised and the cached pattern is returned relatively quickly, but still in the region of ~130us (re) and ~250us (regex). Which wouldn't be a problem, except that this particular pattern is being compiled potentially hundreds of times, and there are several others just like it, adding up to several thousand patterns being re-compiled and pulled from the cache at a cost of ~130us per cache lookup (when using re).

(2014-03-27, 20:25)exobuzz Wrote: I wonder how much difference this will make in the real world, when most of the time is spent doing an HTTP GET, caching data to disk, and so on.

You're absolutely right that any optimisations may still be swamped by other processing overhead, and that's where the elapsed timings (captured with the addition of importhook.unload()) can give an idea of how significant any potential saving might be.

However, if it's possible to save even just 1% here and another 1% there for a relatively small code change, I'd have thought it would be worth it when you consider that platforms such as the Pi really don't have much horsepower to spare! :)

This originally started as an investigation to determine whether the regex package offered any performance benefits over the standard re package, and unfortunately that doesn't seem likely, with regex regularly turning in slower times than re, particularly when compiling patterns (either new patterns or previously cached ones). I'm not sure what the official plans are for regex, but if it were to become a direct replacement for re as the standard regular expression package in Python, then any code that frequently calls re.compile() may suffer as a result.

(2014-03-27, 20:25)exobuzz Wrote: I'm not arguing against optimisation of course, just wondering what the real benefit will be - I'd be surprised if we can speed the addon up by 4-5 seconds (I never noticed it being that slow on my pi either).

The 4-5 seconds is in relation to the plugin.video.free.cable-beta addon, based solely on the observed time spent compiling what looks like repeated patterns (unless the addon really does have 465 unique patterns that it needs to compile, but that seems unlikely).

In terms of shaving time from plugin.video.iplayer, we're probably talking about a second or so per user navigation since each navigation - eg. drilling down into a list - seems to re-process a lot of the same programme data each time.

(2014-03-27, 20:25)exobuzz Wrote: That said, I'll be happy to implement at least some of your patch over at the iplayer ticket you made.

Many thanks. Another potential optimisation would be to eliminate the 9 indirect pattern compilations occurring in listparser.py::parse() which result from the calls to re.findall() while iterating over entriesSrc (this includes the three patterns in episode_exprs). I can put this into another patch if you wish, although hopefully you might want to change the code yourself as the change is pretty straightforward. :)
Reply
#28
(2014-03-27, 20:25)exobuzz Wrote: Also to note the top answer on

https://stackoverflow.com/questions/4521...re-compile

Confirming the observation from the third top answer:
Code:
rpi512:~ # python -m timeit -s "import re" "re.match('hello', 'hello world')"
10000 loops, best of 3: 62.1 usec per loop
rpi512:~ # python -m timeit -s "import re; h=re.compile('hello')" "h.match('hello world')"
100000 loops, best of 3: 12.7 usec per loop

Avoiding the compile()/cache lookup reduces the time required for the pre-compiled match() to about a fifth (12.7 usec versus 62.1 usec).

Simplifying this test in an attempt to isolate the cache lookup overhead:
Code:
rpi512:~ # python -m timeit -s "import re; h=re.compile('hello')" "re.compile('hello')"
10000 loops, best of 3: 35.1 usec per loop
suggests that returning a simple pattern from the cache takes ~35 usec on average, which is probably a best-case figure (1GHz ARM, re package, Python 2.7.3).

Just for fun, regex:
Code:
rpi512:~ # python -m timeit -s "import regex" "regex.match('hello', 'hello world')"
10000 loops, best of 3: 85.8 usec per loop
rpi512:~ # python -m timeit -s "import regex; h=regex.compile('hello')" "h.match('hello world')"
100000 loops, best of 3: 14.1 usec per loop
rpi512:~ # python -m timeit -s "import regex; h=regex.compile('hello')" "regex.compile('hello')"
10000 loops, best of 3: 53 usec per loop
Reply
#29
(2014-03-20, 16:42)popcornmix Wrote: There is some evidence that python regular expression matching is a cause of slow add-ons on the Pi. We may be able to do something about that, but first need some evidence.

First, what are the worst add-ons for speed on the Pi? [...]

The old livestreams addon is horribly slow when parsing a simple xml file on RPi.
An example list can be found in the librtmp thread: http://forum.xbmc.org/showthread.php?tid=162307
Reply
#30
(2014-04-17, 12:18)tuxen Wrote: The old livestreams addon is horribly slow when parsing a simple xml file on RPi.
A example list can be found in the librtmp thread: http://forum.xbmc.org/showthread.php?tid=162307

Could you try adding the profiling code to the livestreams addon and uploading the profile data?
Reply
