Yup, updated to Gotham and the USTVnow channels scraped just fine. Everything seems to be in perfect working order
Thanks again for all your hard work bro. Can't wait for the donor features.
Channel Sharing...
Has anyone had any issues setting up Channel Sharing?
I've followed the directions regarding sharing, but my client never syncs with the master.
I enabled sharing on the client and pointed the location to the master.
What am I missing?
I should mention that I haven't shared my playlists (with Path Substitution), but each instance has identical copies.
Not sure if that would cause an issue.
So, that Reddit channel modification I was working on:
I've hit a bit of a wall and was wondering if anyone might have any ideas?
I can easily get a list of YouTube videos for the channel, but XBMC needs the duration of each video, which is a problem.
I can use YouTube's API to get the duration details, but I have to make a request for each video. After about 100 requests the API starts refusing to answer because the script hits YouTube's API quota. It also means that any further request (even for other YouTube channels) gets refused as well.
There's a way to do batch requests, but that requires a developer key, which means that all of PseudoTV's users would be on the same quota. That can't be good?
I could restrict the video limit... but even then, if you were to refresh the channel list twice in quick succession, everything would stop working since the quota would get hit -- which would end up confusing a lot of people.
Of course, the modification works great for me since I'm the only person using it and know what's up. But I'd rather be able to get the modification added.
Anyone have any ideas?
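To make the bottleneck concrete, here's a minimal sketch of the two pieces involved. The names are hypothetical (the real script uses feedparser plus its own regex), and the duration parsing assumes the gdata v2 behaviour of returning each video's length as a `yt:duration` element with a `seconds` attribute:

```python
import re

# Illustrative pattern only -- the real script uses feedparser plus its
# own regex. Matches the 11-character video id in common YouTube URL forms.
VIDEO_ID_RE = re.compile(r"(?:youtube\.com/watch\?v=|youtu\.be/)([\w-]{11})")

def extract_video_ids(text):
    """Return the unique YouTube video ids found in a blob of feed text."""
    seen = set()
    ids = []
    for vid in VIDEO_ID_RE.findall(text):
        if vid not in seen:
            seen.add(vid)
            ids.append(vid)
    return ids

def duration_from_gdata_xml(xml_text):
    """Pull the duration in seconds out of a gdata v2 video entry, which
    carries it as a yt:duration element with a seconds attribute."""
    m = re.search(r"yt:duration[^>]*seconds=['\"](\d+)['\"]", xml_text)
    return int(m.group(1)) if m else None
```

The problem is exactly as described: `duration_from_gdata_xml` needs one HTTP request per video id, so a 100-video subreddit burns 100 quota units in one refresh.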
(2014-05-11, 00:50)Lunatixz Wrote: [ -> ] (2014-05-11, 00:34)ianuk2005 Wrote: [ -> ]Quick question, I can't seem to find any info on this. For all my channels no matter whether it's a 3 hour movie or 15 minute show they all appear as 1 hour long in the epg. Is this normal??
No, this is not normal; either your media is encoded with incorrect run times or XBMC scraped incorrect info. Search the forum for how to fix it.
All my media seems to be scraped correctly with the correct run times. The plugins I'm using are gomovies and gotv, which pull data from various sources before selecting a stream source. I'm not sure how PseudoTV handles everything on the back end, so I'm assuming it's not receiving any run times from the plugin and defaulting? Is there any solution that will make it use the run times from the library info?
(2014-05-12, 20:51)ianuk2005 Wrote: [ -> ] (2014-05-11, 00:50)Lunatixz Wrote: [ -> ] (2014-05-11, 00:34)ianuk2005 Wrote: [ -> ]Quick question, I can't seem to find any info on this. For all my channels no matter whether it's a 3 hour movie or 15 minute show they all appear as 1 hour long in the epg. Is this normal??
No, this is not normal; either your media is encoded with incorrect run times or XBMC scraped incorrect info. Search the forum for how to fix it.
All my media seems to be scraped correctly with the correct run times. The plugins I'm using are gomovies and gotv, which pull data from various sources before selecting a stream source. I'm not sure how PseudoTV handles everything on the back end, so I'm assuming it's not receiving any run times from the plugin and defaulting? Is there any solution that will make it use the run times from the library info?
Are you using strms or direct plugin building?
(2014-05-12, 20:28)for-the Wrote: [ -> ]So, that Reddit channel modification I was working on:
I've hit a bit of a wall and was wondering if anyone might have any ideas?
I can easily get a list of YouTube videos for the channel, but XBMC needs the duration of each video, which is a problem.
I can use YouTube's API to get the duration details, but I have to make a request for each video. After about 100 requests the API starts refusing to answer because the script hits YouTube's API quota. It also means that any further request (even for other YouTube channels) gets refused as well.
There's a way to do batch requests, but that requires a developer key, which means that all of PseudoTV's users would be on the same quota. That can't be good?
I could restrict the video limit... but even then, if you were to refresh the channel list twice in quick succession, everything would stop working since the quota would get hit -- which would end up confusing a lot of people.
Of course, the modification works great for me since I'm the only person using it and know what's up. But I'd rather be able to get the modification added.
Anyone have any ideas?
Duration is available from YouTube. Review my YouTube code to learn how.
For those with a SiliconDust HDHomeRun Prime: I used a PowerShell script written by user wtg on the SiliconDust forums that I slightly modified. This will work on Windows 7 or 8 when running PowerShell as admin.
Code:
<#
.Synopsis
Creates .strm files that can be used by XBMC to view live TV using a Silicon Dust HDHomeRun.
.Parameter device
If unspecified the script discovers your device id automatically, which should be fine unless you
have multiple HDHomeRuns. If you do have multiple devices, supply the ID of the device you wish
to generate strm files for.
.Parameter tuner
(Optional, default = 0) Supply a different tuner number if desired.
.Parameter outputDir
(Optional, default = %TEMP%) Directory to write the .strm files to. Supply another directory name
if desired, or a period for the current directory.
.Parameter LineUpXML
(Optional, default = "%ProgramData%\SiliconDust\HDHomeRun\CableCard*.xml") If you have both a
"Digital Cable.XML" and "Digital Antenna.XML" scan file because you're using the original HDHomeRun
with both cable and antenna inputs - or have at least run a GUI scan from both sources at some
point - you'll need to specify the filename. By the default the script grabs whichever file it finds
in the Silicon Dust program data directory.
.Parameter pad
(Optional) Zero-pad the channel numbers to the specified number of digits.
.Parameter scan
Specify this option if you want the process to generate the files by running a command-line scan
and processing the results. If unspecified the file is generated via the XML file in which the
HDHomeRun stores the last gui scan results. It's much slower to force a scan and the stream files
won't be named as meaningfully.
.Example
.\gen_strms.ps1 -outputDir c:\temp
Generate streams for tuner 0 and write the files to c:\temp.
.Example
.\gen_strms.ps1 -tuner 1 -out c:\temp
Generate streams for tuner 1 and write to c:\temp. Parameters don't need to be spelled out completely.
.Example
.\gen_strms.ps1 -out . -LineUpXML 'c:\ProgramData\SiliconDust\HDHomeRun\Digital Cable.xml'
Generate streams for tuner 0 and write files to the current directory, using the Digital Cable.xml
file.
.NOTES
Date Version Comments
-------- ------- -------------------------------------------------------
10/22/12 1.0 Initial release
11/08/12 1.1 Added ability to specify a scan file on the command line;
added error message if two XML scan files are found; added
tuner to the output file name; added support for padding
channel numbers; improved support for scan file.
#>
Param(
[string] $device = 'FFFFFFFF',
[string] $tuner = '0',
[string] $outputDir = "$env:temp",
[int] $pad = 1,
[string] $LineUpXML = "$env:ProgramData\SiliconDust\HDHomeRun\CableCard*.xml",
[switch] $scan = $false,
[string] $ScanFile = "")
# create a regex pattern to find invalid characters in a filename
$invalid_chars = "[{0}]" -f ([Regex]::Escape([System.IO.Path]::GetInvalidFileNameChars()-join''))
if ( $device -eq 'FFFFFFFF' ) {
# discover the device id, or the last one reported if more than one
& "$env:programfiles\silicondust\hdhomerun\hdhomerun_config.exe" discover | %{
switch -regex ($_) {
"device (\w+) found" { $device = $matches[1] }
}
}
write-host "Creating streams for device id $device"
}
if ( $scan -or $ScanFile -gt '' ) {
if ( $scan ) {
write-host "Starting scan - this can take a while..."
$scan_data = & "$env:programfiles\silicondust\hdhomerun\hdhomerun_config.exe" $device scan $tuner
} else {
Write-Host "Reading scan file $ScanFile..."
$scan_data = get-content $ScanFile
}
$scan_data | % {
switch -regex ( $_ ) {
"SCANNING.*:(?<chan>\d+)" { $chan = $matches.chan
write-verbose "Processing physical channel $chan"
break
}
"PROGRAM (?<PID>\d+): (?<prog>\d*\.*\d*)\s*(?<chan_name>.*)" {
$ProgID = $matches.PID
$prog = $matches.prog
$chan_name = $matches.chan_name
if ( $chan_name -eq "" ) { $chan_name = 'Blank' }
if ( $chan_name -match "(\(encrypted\)|\(control\)|\(no data\))" ) {
write-verbose "Channel $chan_name, program $prog is encrypted or has no data"
break
}
# write the data to the file.
if ( $chan -match '[.]' ) {
# The guide number has a period in it and thus shouldn't be padded
$number = $chan
} else {
$number = ("{0:d$pad}" -f [int]$chan)
}
$file = "$number-$ProgID ($tuner)-$chan_name.strm"
$file = Join-Path $OutputDir ([Regex]::Replace($file, $invalid_chars, ''))
write-host "Writing stream file $file."
"hdhomerun://$device-$tuner/tuner$tuner`?channel=auto:$chan&program=$ProgID" `
| out-file -FilePath $file -encoding "ASCII"
}
}
}
} else {
# Verify path to GUI listing file.
switch ( ([array](dir $LineUpXML)).count ) {
1 {
write-output "Loading HDHomeRun's lineup file from last GUI scan..."
$xml = [xml](get-content $LineUpXML)
}
{$_ -gt 1} { throw "More than 1 file exists matching $LineUpXML. Use the -LineUpXML option to specify which file to use." }
default { throw "ERROR: Cannot find HDHomeRun's GUI's lineup file $LineUpXML. " }
}
Write-Output "Number of channels found: $($xml.LineUp.Program.Count)"
$xml.LineUp.Program | %{
if ($_.Enabled -eq 'true') {
if ( $_.GuideNumber -match '[.]' ) {
# The guide number has a period in it and thus shouldn't be padded
$number = $_.GuideNumber
} else {
$number = ("{0:d$pad}" -f [int]$_.GuideNumber)
}
$file = "$number ($tuner)-" + $_.Name + '.strm'
$file = Join-Path $OutputDir ([Regex]::Replace($file, $invalid_chars, ''))
write-host "Writing stream file $file."
"hdhomerun://$device-$tuner/tuner$tuner`?channel=$($_.Modulation):$($_.Frequency)&program=$($_.ProgramNumber)" `
| out-file -FilePath $file -encoding "ASCII"
}
}
}
(2014-05-12, 21:02)Lunatixz Wrote: [ -> ]Duration is available from youtube. Review my youtube code to learn how.
Oh, I can get duration for the first 100 videos. After that it starts failing.
Your code has the advantage of pulling the duration from a single API request (the whole playlist, the whole user's channel, etc.). But because the videos on Reddit don't have any relationship between them, I'm forced to get the duration one by one with an API request for each video, which eventually causes the API to start sending back 403 - Access Denied errors -- and not just for my part of the script. All future YouTube requests will also fail...
(2014-05-12, 21:16)for-the Wrote: [ -> ] (2014-05-12, 21:02)Lunatixz Wrote: [ -> ]Duration is available from youtube. Review my youtube code to learn how.
Oh, I can get duration for the first 100 videos. After that it starts failing.
Your code has the advantage of pulling the duration from a single API request (the whole playlist, the whole user's channel, etc.). But because the videos on Reddit don't have any relationship between them, I'm forced to get the duration one by one with an API request for each video, which eventually causes the API to start sending back 403 - Access Denied errors -- and not just for my part of the script. All future YouTube requests will also fail...
Yeah, sounds like you're hammering the API... what kind of list are you trying to parse? Can I see an example Reddit list?
(2014-05-12, 21:36)Lunatixz Wrote: [ -> ]Yeah, sounds like you're hammering the API... what kind of list are you trying to parse? Can I see an example Reddit list?
Yup, it's exactly that. The API won't let me dump 100 requests at it. Which isn't a problem for one or two channels. But add 4-5 Reddit channels and suddenly it stops working.
There's nothing special about the Reddit list. I use feedparser and some regex to create a list of YouTube videos. So, something like: ["a6I2izxeabY","UbvZt3CjNOg", etc.]
Then I loop through them and grab the duration information from YouTube using their API URL:
http://gdata.youtube.com/feeds/api/video...3CjNOg?v=2
Which gets me the duration, but only for a single video per request. And it all works beautifully... unless you really like Reddit channels. Then it silently fails when the API quota is hit. And not just for that one Reddit channel, but for all other YouTube channels as well.
Unlike playlists or users' sites, there's no way (that I can determine) to request data on a bunch of randomly chosen videos.
So... I'm stuck with a solution that works 90% of the time. And I'm pretty sure that having 10% of all users going "Why doesn't this work?" isn't a reasonable solution.
What's frustrating is that it's just missing the duration information. I can get all the other information I need (title, summary, etc) from Reddit. But, without the duration of the video I can't build the playlist.
(2014-05-12, 22:13)for-the Wrote: [ -> ] (2014-05-12, 21:36)Lunatixz Wrote: [ -> ]Yeah sounds like your hammering the api... what kind of list are you trying to parse? Can I see an example reddit list?
Yup, it's exactly that. The API won't let me dump 100 requests at it. Which isn't a problem for one or two channels. But add 4-5 Reddit channels and suddenly it stops working.
There's nothing special about the Reddit list. I use feedparser and some regex to create a list of YouTube videos. So, something like: ["a6I2izxeabY","UbvZt3CjNOg", etc.]
Then I loop through them and grab the duration information from YouTube using their API URL:
http://gdata.youtube.com/feeds/api/video...3CjNOg?v=2
Which gets me the duration, but only for a single video per request. And it all works beautifully... unless you really like Reddit channels. Then it silently fails when the API quota is hit. And not just for that one Reddit channel, but for all other YouTube channels as well.
Unlike playlists or users' sites, there's no way (that I can determine) to request data on a bunch of randomly chosen videos.
So... I'm stuck with a solution that works 90% of the time. And I'm pretty sure that having 10% of all users going "Why doesn't this work?" isn't a reasonable solution.
What's frustrating is that it's just missing the duration information. I can get all the other information I need (title, summary, etc) from Reddit. But, without the duration of the video I can't build the playlist.
You need to find a way to add the videos to a YouTube playlist. Then gdata the playlist which will have all relevant info in it.
https://www.google.com/search?safe=off&s...st+creator
@Lunatixz or any other skinners who may be in the know: are there specific IDs for skinning the individual show listings in the EPG, as opposed to the entire row? Let me know if my question doesn't make sense.
I'm halfway done with a custom skin to match the Eminence skin.
(2014-05-12, 22:19)Lunatixz Wrote: [ -> ]You need to find a way to add the videos to a YouTube playlist. Then gdata the playlist which will have all relevant info in it.
I think that just moves the problem: instead of being unable to create the channel, I'll be unable to create the playlist, because adding 100 videos to a playlist is still 100 API calls. (Also, create the playlist under whose account? Create a PseudoTV user that has, literally, thousands of playlists, one for each PseudoTV Live user? Or force users to submit their YouTube account details and have it create playlists on their accounts?)
The best solution I can think of is to create a YouTube video that just says "Error: API Overload" and have the script pass that video's information back to PseudoTV Live when the API limit is hit. So, for the 10% of the time it doesn't work, it'll at least tell users why, and they'll still have a working channel list. Then, hopefully, on the channel's next refresh they won't hit the limit and those videos will eventually get populated. Not the prettiest of solutions... but the view count on the video will also be a good indication of how often users are hitting the API limit.
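That graceful-degradation idea could be sketched roughly as below. Every name here is hypothetical: `fetch_duration` stands in for the per-video gdata call (modelled as returning `None` once the quota is exhausted), and the placeholder id is made up.

```python
# Hypothetical sketch of the fallback: names and the placeholder id are
# made up, and fetch_duration stands in for the per-video gdata call.
PLACEHOLDER_ID = "XXXXXXXXXXX"   # id of the "Error: API Overload" video
PLACEHOLDER_DURATION = 30        # its known length in seconds

def build_entries(video_ids, fetch_duration):
    """Build (video_id, duration) pairs, swapping in the placeholder
    once the API starts failing (modelled as fetch_duration -> None)."""
    entries = []
    quota_hit = False
    for vid in video_ids:
        duration = None if quota_hit else fetch_duration(vid)
        if duration is None:
            quota_hit = True  # stop hammering the API after the first failure
            entries.append((PLACEHOLDER_ID, PLACEHOLDER_DURATION))
        else:
            entries.append((vid, duration))
    return entries
```

Note the channel list stays complete: every video past the failure point becomes a playable placeholder rather than a hole in the schedule.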
(2014-05-12, 21:02)Lunatixz Wrote: [ -> ] (2014-05-12, 20:51)ianuk2005 Wrote: [ -> ] (2014-05-11, 00:50)Lunatixz Wrote: [ -> ]No, this is not normal; either your media is encoded with incorrect run times or XBMC scraped incorrect info. Search the forum for how to fix it.
All my media seems to be scraped correctly with the correct run times. The plugins I'm using are gomovies and gotv, which pull data from various sources before selecting a stream source. I'm not sure how PseudoTV handles everything on the back end, so I'm assuming it's not receiving any run times from the plugin and defaulting? Is there any solution that will make it use the run times from the library info?
Are you using strms or direct plugin building?
Strm files, content example:
plugin://plugin.video.GOtv/?action=play&name=2+Broke+Girls+S01E01&title=Pilot&imdb=1845307&tvdb=248741&year=2011&season=1&episode=1&show=2+Broke+Girls&show_alt=2+Broke+Girls&date=2011-09-19
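That strm line is just a plugin URL whose query string carries the episode metadata; notably there is no duration field anywhere in it, which is presumably why PseudoTV falls back to a default length. A quick sketch of how such a line decomposes (Python 3 shown purely for illustration; XBMC addons of this era ran Python 2):

```python
from urllib.parse import urlsplit, parse_qs

def strm_params(strm_line):
    """Decompose a plugin:// strm line into its query parameters.
    parse_qs also decodes '+' back into spaces."""
    query = urlsplit(strm_line.strip()).query
    return {k: v[0] for k, v in parse_qs(query).items()}
```

Running it over the example line yields `action`, `name`, `title`, `season`, and so on, but no runtime, so the duration has to come from somewhere else (the library, an NFO, or the plugin itself).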
(2014-05-12, 23:20)for-the Wrote: [ -> ] (2014-05-12, 22:19)Lunatixz Wrote: [ -> ]You need to find a way to add the videos to a YouTube playlist. Then gdata the playlist which will have all relevant info in it.
I think that just moves the problem: instead of being unable to create the channel, I'll be unable to create the playlist, because adding 100 videos to a playlist is still 100 API calls. (Also, create the playlist under whose account? Create a PseudoTV user that has, literally, thousands of playlists, one for each PseudoTV Live user? Or force users to submit their YouTube account details and have it create playlists on their accounts?)
The best solution I can think of is to create a YouTube video that just says "Error: API Overload" and have the script pass that video's information back to PseudoTV Live when the API limit is hit. So, for the 10% of the time it doesn't work, it'll at least tell users why, and they'll still have a working channel list. Then, hopefully, on the channel's next refresh they won't hit the limit and those videos will eventually get populated. Not the prettiest of solutions... but the view count on the video will also be a good indication of how often users are hitting the API limit.
So instead of building a list that can be parsed correctly, you would rather API-call each video until you get temp-banned from the API, then write code to handle the exception? I wouldn't be interested in that...
If you have the video IDs, which you indicated you do, then just add the videos to a YouTube playlist using Python, then parse the playlist for the video data.
If I were you I would do something like this
http://stackoverflow.com/questions/21277...-in-python batch the job to only add 25 video IDs at a time, up to a limit set by the user (25|50|100|etc).
Then I would call the playlist with feedparser and get all the information needed to populate the channel.
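The batching step Lunatixz suggests could be sketched like this (the chunk size mirrors the proposed user-configurable limit; the playlist-insert call itself is omitted since it depends on authenticated gdata access):

```python
def chunked(video_ids, size=25):
    """Split a list of video ids into batches of at most `size`,
    so each playlist-insert request stays within the chosen limit."""
    return [video_ids[i:i + size] for i in range(0, len(video_ids), size)]
```

Each batch would then be one playlist-insert request instead of 25 separate metadata requests, and a single feedparser pass over the finished playlist recovers titles and durations in one call.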
(2014-05-12, 23:24)ianuk2005 Wrote: [ -> ] (2014-05-12, 21:02)Lunatixz Wrote: [ -> ] (2014-05-12, 20:51)ianuk2005 Wrote: [ -> ]All my media seems to be scraped correctly with the correct run times. The plugins I'm using are gomovies and gotv, which pull data from various sources before selecting a stream source. I'm not sure how PseudoTV handles everything on the back end, so I'm assuming it's not receiving any run times from the plugin and defaulting? Is there any solution that will make it use the run times from the library info?
Are you using strms or direct plugin building?
Strm files, content example:
plugin://plugin.video.GOtv/?action=play&name=2+Broke+Girls+S01E01&title=Pilot&imdb=1845307&tvdb=248741&year=2011&season=1&episode=1&show=2+Broke+Girls&show_alt=2+Broke+Girls&date=2011-09-19
Okay, I think I have an idea why it's not working for you... that plugin doesn't generate NFOs. I'll add some code to deal with the problem.
My PVR channels won't load in PTVL, but they load okay in XBMC's Live TV. I had them working yesterday, before I messed them up by auto-tuning a video addon with Super Favorites... guess Tekzilla wasn't considered a folder in Rev3... doh!
Any idea how to get PVR channels back?
xbmc.log