MythLogBot@irc.freenode.net :: #mythtv

Daily chat history

Current users (80):

aloril, Anduin_, Anssi, anykey_, Beirdo, chainsawbike, Chutt, clever, coling, Cougar, dagar, Dave123, dekarl, dlblog, dudz_, eharris, f33dMB, foobum, foxbuntu, ghoti, Gibby, gigem, gregL, GreyFoxx, highzeth, iamlindoro, J-e-f-f-A, j-rod|afk, JamesHarrison, jams, jhp, joe_, jpabq, jpabq-, jpharvey__, jstenback, justinh, jwhite, kc, knightr, kormoc, kurre_, kwmonroe, mag0o, mike|2, MythBuild, MythLogBot, PointyPumper, poptix, purserj, rhpot1991, sailerboy, skd5aner, Slasher`, Snow-Man, sphery, sraue, stuarta, sutula, tgm4883, ThisNewGuy, tomimo, tris, Unhelpful, vallor, wagnerrp, wahrhaft, ybot, zCougar, _charly_, andreax, BeeBob, danielk22, zombor, MaverickTech, laga_, jmartens, jcarlos_, beata, kenni
Saturday, July 16th, 2011, 00:00 UTC
[00:00:08] Mousey (Mousey!~wtfisme@ross154.net) has quit (Ping timeout: 264 seconds)
[00:02:07] jmartens (jmartens!~jmartens@s5597ca60.adsl.wanadoo.nl) has quit (Quit: Leaving.)
[00:02:35] paul-h (paul-h!~paulh@5adce259.bb.sky.com) has joined #mythtv
[00:02:35] paul-h (paul-h!~paulh@5adce259.bb.sky.com) has quit (Changing host)
[00:02:35] paul-h (paul-h!~paulh@mythtv/developer/paul-h) has joined #mythtv
[00:03:04] paul-h_ (paul-h_!~Paul@mythtv/developer/paul-h) has quit (Remote host closed the connection)
[00:03:45] jya (jya!~jyavenard@mythtv/developer/jya) has quit (Quit: jya)
[00:04:16] paul-h: iamlindoro: I'm still working on it slowly as time and enthusiasm allows
[00:06:22] paul-h: the player will play mp3 podcasts without problems, it's just that there is no UI
[00:06:54] iamlindoro: paul-h: Glad it's still in progress-- is what you've done what's committed? I think there's a conception that you have a lot sitting locally, but that may be a mistaken one
[00:07:37] iamlindoro: (btw, I only ask out of curiosity, I'm very glad you're working on it and have no complaints about the pace)
[00:08:06] paul-h: I've actually been using the new UI from xmas :) I just need to pull my finger out and finish it
[00:08:15] Beirdo: it's good to have the info for when people ask :) I look forward to a reworked mythmusic
[00:09:53] iamlindoro: oh, awesome! If it's in any kind of usable state that's good news
[00:10:54] kormoc is now known as kormoc_afk
[00:11:13] Beirdo: iamlindoro: do you know if we have a generic JSON parser in the new protocol stuff?
[00:11:50] Beirdo: I'm thinking in the back of my head of using JSON between the weather grabbers and mythweather later (as the current parser is a bit crashy)
[00:11:51] iamlindoro: Beirdo: Parser? No, the protocol is purely concerned with output, not input
[00:11:56] Beirdo: K.
[00:12:13] iamlindoro: it'll turn the data structures into whatever you want, but not vice versa
[00:12:27] Beirdo: that's what I thought, but I figured you'd paid more attention to the progress there :)
[00:13:12] andreax (andreax!~andreaz@p57B941D8.dip.t-dialin.net) has quit (Read error: Connection reset by peer)
[00:13:34] iamlindoro: I think dblain had talked about maybe writing something to allow the use of the new API from the FE, but I'm not sure which serializer he intended to use for that... but obviously that would involve a parser of whichever type
[00:14:06] iamlindoro: And I'm not 100% whether he still intends to do so... though it would be nice to use the APIs directly in the FE rather than them just being reserved for outside bolt-ons
[00:14:08] Beirdo: yeah
[00:14:54] Beirdo: it's on my long list, so perhaps it will get further along before the itch needs scratching :)
[00:18:11] omaha (omaha!~omaha@216-15-2-147.c3-0.bth-ubr1.lnh-bth.md.cable.rcn.com) has quit (Ping timeout: 240 seconds)
[00:31:56] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[00:38:47] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[00:46:40] kormoc_afk is now known as kormoc
[01:04:56] davide_ (davide_!~david@mythtv/developer/gigem) has quit (Remote host closed the connection)
[01:05:22] davide_ (davide_!~david@host103.16.intrusion.com) has joined #mythtv
[01:05:22] davide_ (davide_!~david@host103.16.intrusion.com) has quit (Changing host)
[01:05:22] davide_ (davide_!~david@mythtv/developer/gigem) has joined #mythtv
[01:26:39] dblain: iamlindoro: If I ever find time, I still would like to see the API callable from the FE. The difficulty is that I want the client proxy class (used by the FE) to be autogenerated or at least as much as possible. I don't like the idea of having to maintain multiple pieces of code every time we add new functionality.
[01:27:10] iamlindoro: dblain: That makes sense
[01:27:31] iamlindoro: would definitely be nice to start transitioning to a single API, and I'm convinced the Services API is the future
[01:27:40] dblain: FWIW: if any of the service implementation is ever moved to a library (out of mythbackend), they can be linked to and used as standard c++ classes.
[01:28:18] dblain: Glad you like it. Wasn't sure how people would respond to it.
[01:30:56] dblain: Beirdo: My long term plan was to allow for deserialization of JSON since I was thinking it would be a nice to use from the FE.
[01:32:21] dblain: Have you thought of using QScriptEngine (javascript function) to parse the JSON for you? It's really easy to use and since parsing weather data isn't time sensitive it may be the simplest solution.
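
A minimal sketch of the QScriptEngine approach dblain suggests, using Qt 4's QtScript module; the weather JSON keys below are invented for illustration and this is not MythWeather code:

    #include <QtScript/QScriptEngine>
    #include <QtScript/QScriptValue>
    #include <QString>
    #include <QtDebug>

    void parseWeatherJson(const QString &json)
    {
        QScriptEngine engine;
        // Wrapping the text in parentheses makes the engine treat it as an
        // object-literal expression rather than a statement block.
        QScriptValue root = engine.evaluate("(" + json + ")");
        if (engine.hasUncaughtException())
        {
            qWarning() << "JSON parse failed:"
                       << engine.uncaughtException().toString();
            return;
        }

        // "temperature" and "conditions" are made-up keys for this sketch.
        qDebug() << "temp:" << root.property("temperature").toNumber()
                 << "sky:"  << root.property("conditions").toString();
    }
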
[01:32:46] sphery_ is now known as sphery
[01:37:16] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[02:13:57] gregL (gregL!~greg@cpe-74-76-125-87.nycap.res.rr.com) has joined #mythtv
[02:44:44] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[02:46:48] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[02:53:19] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[03:15:11] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[03:39:12] wagnerrp (wagnerrp!~Wagner@nr-ft1-66-42-241-137.fuse.net) has quit (Quit: Leaving)
[03:39:40] wagnerrp (wagnerrp!~wagnerrp_@mythtv/developer/wagnerrp) has joined #mythtv
[04:24:09] ybot_ (ybot_!~quassel@61.14.141.36) has quit (Read error: Connection reset by peer)
[04:24:27] ybot (ybot!~quassel@61.14.141.36) has joined #mythtv
[04:32:25] Beirdo: dblain: no, I hadn't gotten that far in my thoughts. That does sound like it could be doable though
[05:01:43] xris: there isn't an easier C++ JS parsing lib out there?
[05:01:46] xris: simpler, that is...
[05:01:53] xris: rather than executing the javascript?
[05:08:42] beata_ (beata_!beata@she.hatesme.com) has quit (Read error: Operation timed out)
[05:10:26] Beirdo: you would think so
[05:13:51] beata (beata!beata@she.hatesme.com) has joined #mythtv
[05:17:22] xris: http://json.org/  :)
[05:17:34] xris: control-f, c++, enter
[05:17:48] Beirdo: heh
[05:17:58] dblain: I'm not saying it would be the best approach, just that it might work and wouldn't require a new dependency.
[05:19:30] dblain: I was planning on writing a json parser to deserialize, but it would be driven by "datacontracts" i.e. QObject based classes with properties (same data classes the service api uses)
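
As a rough illustration of the "datacontract" idea dblain describes (QObject-based classes with properties that one generic serializer can walk via Qt introspection), here is a hedged sketch; the Program class and its properties are invented, not the actual service data classes:

    #include <QObject>
    #include <QMetaObject>
    #include <QMetaProperty>
    #include <QVariantMap>
    #include <QString>

    class Program : public QObject
    {
        Q_OBJECT
        Q_PROPERTY(QString title  READ title  WRITE setTitle)
        Q_PROPERTY(int     season READ season WRITE setSeason)
    public:
        Program() : m_season(0) {}
        QString title() const           { return m_title; }
        void setTitle(const QString &t) { m_title = t; }
        int season() const              { return m_season; }
        void setSeason(int s)           { m_season = s; }
    private:
        QString m_title;
        int     m_season;
    };

    // One generic property walk serves every datacontract, so adding a new
    // service class does not require new serializer code.
    QVariantMap toVariantMap(const QObject *obj)
    {
        QVariantMap map;
        const QMetaObject *meta = obj->metaObject();
        for (int i = meta->propertyOffset(); i < meta->propertyCount(); ++i)
        {
            QMetaProperty prop = meta->property(i);
            map.insert(prop.name(), prop.read(obj));
        }
        return map;
    }
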
[05:19:37] Beirdo: http://qjson.sourceforge.net/
[05:19:47] Beirdo: hmmm, that looks like it might be useful
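
For reference, the qjson library linked above exposes roughly this interface (the 0.7-era API as I recall it; treat the exact calls as an assumption), returning a QVariant tree the weather code could then walk:

    #include <qjson/parser.h>
    #include <QByteArray>
    #include <QVariantMap>
    #include <QtDebug>

    bool parseGrabberOutput(const QByteArray &data)
    {
        QJson::Parser parser;
        bool ok = false;

        // parse() returns the whole document as a QVariant hierarchy.
        QVariantMap result = parser.parse(data, &ok).toMap();
        if (!ok)
        {
            qWarning() << "qjson could not parse the grabber output";
            return false;
        }

        qDebug() << "top-level keys:" << result.keys();
        return true;
    }
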
[05:20:20] wagnerrp: dblain: in regards to switching much or all of the internal communication over to the XML API, is that configured such that it could be used for a persistent connection?
[05:21:09] dblain: Beirdo: That link does look interesting.
[05:21:10] wagnerrp: stuff like detecting what backends/mediaservers/jobqueues are available, and what frontends and recordings are active, requires a persistent connection
[05:21:16] wagnerrp: if nothing else, as a heartbeat
[05:22:38] dblain: wagnerrp: Currently it uses the http protocol to expose the services which is connectionless. But I designed the API to allow for any custom transport to be created.
[05:23:00] Beirdo: dblain: yah, seems it might save a bit of implementation if someone already bolted it onto Qt for us.  :)
[05:23:04] dblain: (Well the http 1.1 does allow you to keep the connection open which it does support)
[05:23:09] wagnerrp: ok, i know it was mentioned a while back the possibility of using it as a serializer with the existing protocol stuff
[05:26:35] dblain: wagnerrp: I haven't pursued making the new service api the main protocol... years ago there was opposition due to potential performance issues with serializing/deserializing xml (the old code only worked for xml). I've been more trying to expose all needed methods in a way that COULD be used by the FE, directly linked and used, or called through a remote protocol (http currently).
[05:27:21] wagnerrp: dblain: there is still that opposition today
[05:27:56] wagnerrp: but if someone writes a binary serializer for it, it would be interesting to see the performance difference between that and the existing string list stuff
[05:28:01] Beirdo: it would be lessened with JSON (if done efficiently) I would think
[05:28:23] Beirdo: yeah, the current parsing isn't blazing fast either
[05:28:29] dblain: That's what I figured (I don't agree when it comes to metadata, guide data and the like, but I've resigned myself to being the "third-party protocol").
[05:28:29] wagnerrp: especially considering the times you would actually need to worry about efficiency would be things like the PBB pulling the full list of episodes
[05:29:07] wagnerrp: the thing im really concerned about is the whole thing is built off introspection
[05:29:19] wagnerrp: does that happen during compiletime or runtime?
[05:29:25] wagnerrp: if the latter, what does that do for overhead?
[05:29:52] dblain: currently runtime. I haven't done performance testing.
[05:30:20] Beirdo: well, the cool thing with doing this stuff, there's always room for improvement :)
[05:31:00] dblain: The design does allow for compile time serialization code to be generated, I just haven't created the code generator yet (I was looking into something like that for the client side )
[05:31:52] dblain: Hard to justify development time on it when the current approach is fast enough for its current use.
[05:32:05] Beirdo: agreed
[05:32:24] Beirdo: the bang for the buck doesn't sound that huge
[05:33:32] dblain: I would like to see the service implementations that are currently in mythbackend be moved to a library(ies) so they can be used internally so there would be a single implementation for its functionality.
[05:33:46] dblain: internally even.
[05:34:06] Beirdo: sometimes I wish I didn't record quite so much... Gotta wait until midnight to try this new support for --setloglevel
[05:34:32] dblain: I've resorted to using multiple VM's for development
[05:34:46] Beirdo: yeah
[05:34:51] xris: I think kormoc knows of a good c-based json decoder.
[05:34:55] xris: streaming, too
[05:35:10] Beirdo: I'm considering buying another HVR2250 so I can have something reasonable in a dev box
[05:35:32] Beirdo: xris: yeah, but I think the qjson one might fit our needs well
[05:35:52] dblain: HDHomeRun works good for me since it's accessible from the VM.
[05:35:59] Beirdo: much of the glue already being implemented rather than having to roll our own
[05:36:19] wagnerrp: dblain: at least the server side of those services implementations are being slowly migrated to libmythprotoserver
[05:36:23] Beirdo: Hmm, yeah a second HDHomeRun would work well too.
[05:36:43] Beirdo: and gives maximum flexibility too
[05:37:22] wagnerrp: between the base queries and the fileserver stuff
[05:37:23] dblain: wagnerrp: I haven't looked closely at libmythprotoserver.
[05:37:39] wagnerrp: ive got maybe a quarter of the code in mainserver.cpp duplicated in the library
[05:37:43] dblain: but it exposes them with just the mythprotocol.. right?
[05:37:54] ** xris wants to see one of these new hauppauge hdhr prime things **
[05:38:04] wagnerrp: its a re-implementation of the server, in a modular fashion
[05:38:09] dblain: it's the duplication part I don't like ... I'm sure you have a plan to remove that!
[05:38:27] wagnerrp: yeah, the final plan is to remove mainserver.cpp entirely
[05:38:53] dblain: do you foresee the http services moving there as well?
[05:39:02] wagnerrp: i had not planned so
[05:39:34] dblain: xris: I have a 6 tuner hdhomerun prime on order... hoping it ships next week.
[05:39:52] wagnerrp: at the moment, its designed to mimic the existing protocol server
[05:39:56] wagnerrp: handling MythSocket objects
[05:40:01] xris: dblain: nice. that's more tuners than I need.
[05:40:15] xris: 3 would be plenty, but the hauppauge device is 2 tuners and costs half as much as the hdhr.
[05:40:15] wagnerrp: i havent looked at the http server stuff to know how closely it might tie in
[05:40:36] xris: and it *is* a prime. just usb-only.
[05:40:57] wagnerrp: and it is yet to be seen how drivers work, and how to use a tuning adapter
[05:40:58] dblain: I really liked the network connectivity. All my servers are in a rack in the basement.
[05:41:40] dblain: It also allows my kid to use a couple of tuners on their laptops.
[05:41:51] xris: wagnerrp: right. hence my hesitation
[05:42:36] xris: dblain: I'm trying to turn my mythbox into my one and only remaining server.
[05:42:48] dblain: wagnerrp: If the functions that implement the core mythprotocol could be turned into a class library, then we both could use the single implementation.
[05:42:49] xris: doubt I'll get that far, but it's a dream...
[05:42:57] xris: dblain: that'd be sweet.
[05:43:23] xris: btw, anyone want a g+ invite?
[05:44:30] dblain: xris: My main server is an 8 core with 16 gig ram and 7TB raid 6... I run 6 VM's, so I'm kind of on a single server already; the only issue I have is it's very power hungry
[05:45:07] wagnerrp: yeah, ram doesnt really have an idle mode
[05:46:03] dblain: older tech too, which doesn't help (dual xeons with a 1000W power supply)
[05:46:04] xris: heh. my main server is like 4G RAM, running 5 VMs on 300G RAID 1.  :)
[05:47:09] xris: I'm trying to reduce those, too. down to one mail domain with 2 users.. but still have a bunch of websites to host.
[05:47:15] xris: want to get down to a single IP, too
[05:47:53] wagnerrp: dblain: fbdimms?
[05:48:08] dblain: yes, with ECC
[05:48:16] wagnerrp: xris: sounds like mine, but theyre jails, not VMs
[05:48:27] wagnerrp: dblain: dont all fbdimms have ecc?
[05:48:35] dblain: could be.
[05:48:36] xris: wagnerrp: jails would be more efficient memory use
[05:49:05] wagnerrp: anyway, as it stands, the server in libmythprotoserver is designed to handle backend protocol communication
[05:49:08] xris: hoping 3T drive prices drop a little and I can upgrade the mythbox.. then those 2 1.5T drives can go in the server and I'll stop using VMs
[05:49:22] wagnerrp: meaning connection versioning, announcement, and all that stuff
[05:49:30] xris: wagnerrp: how's the frontend side of it?
[05:49:30] wagnerrp: but thats not to say that bit cant be modularized
[05:49:45] xris: that is to say, how close would we be to replacing mythproto with something more sane?
[05:50:10] wagnerrp: xris: at the moment, its a replacement in code, rather than one in protocol
[05:50:18] wagnerrp: it speaks the same thing
[05:50:27] dblain: wagnerrp: if you are interested in pulling the functionality into classes/library, it would be a huge boost to getting the web services to expose all available functionality.
[05:51:14] dblain: Then the code in libmythprotocol would only deal with the specifics of the protocol/serialization...
[05:52:01] ** dblain needs to head to bed... good night everyone. **
[05:52:03] wagnerrp: at the moment, everything is broken up into one class for one chunk of commands
[05:52:12] xris: ah, ick
[05:52:24] ** xris spent the day redoing his work project that way. **
[05:52:30] wagnerrp: the base commands (uptime, memory usage, etc...) are one class
[05:52:37] xris: crazy functional programming multiple inheritance python goodness.  :)
[05:52:44] wagnerrp: so anything hosting a protocol server would load that object
[05:52:50] wagnerrp: the file server commands are in another class
[05:53:03] wagnerrp: so anything running a file server (backend or mediaserver) would load that class
[05:53:50] wagnerrp: and basically, you register a list of objects, based off what services you want to provide
[05:53:55] kenni (kenni!~kenni@mythtv/developer/kenni) has joined #mythtv
[05:54:15] wagnerrp: and when a query comes in, it scans through that list, sees if any of the modules wants to accept it, and otherwise returns an error
[05:55:45] wagnerrp: however, all of those classes are set up to write to the socket directly
[05:55:52] wagnerrp: rather than return a serialized response
[05:56:30] wagnerrp: but that would be an interesting change for future expandability
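
A compressed sketch of the dispatch pattern wagnerrp describes: handler modules are registered with the server, and each incoming query is offered to them in turn. The class and method names here are hypothetical stand-ins, not the actual libmythprotoserver ones:

    #include <QList>
    #include <QStringList>

    // Stand-in for one module of commands (base queries, file server, etc.).
    class SocketHandler
    {
    public:
        virtual ~SocketHandler() {}
        // Return true if this module recognized and handled the command.
        virtual bool HandleQuery(const QStringList &command) = 0;
    };

    class ProtoServer
    {
    public:
        // An application registers only the modules for the services it offers.
        void RegisterHandler(SocketHandler *handler) { m_handlers.append(handler); }

        bool Dispatch(const QStringList &command)
        {
            // Offer the query to each registered module; first taker wins.
            for (int i = 0; i < m_handlers.size(); ++i)
                if (m_handlers[i]->HandleQuery(command))
                    return true;
            return false;   // no module accepted it, so the caller sends an error
        }

    private:
        QList<SocketHandler*> m_handlers;
    };
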
[06:32:35] Goga777 (Goga777!~Goga777@shpd-95-53-177-141.vologda.ru) has joined #mythtv
[06:45:36] Goga777 (Goga777!~Goga777@shpd-95-53-177-141.vologda.ru) has quit (Remote host closed the connection)
[06:59:09] dekarl_afk is now known as dekarl
[06:59:40] jmartens (jmartens!~jmartens@s5597ca60.adsl.wanadoo.nl) has joined #mythtv
[07:03:08] beata (beata!beata@she.hatesme.com) has quit (Read error: Operation timed out)
[07:08:23] beata (beata!beata@she.hatesme.com) has joined #mythtv
[08:12:35] andreax (andreax!~andreaz@p57B9363E.dip.t-dialin.net) has joined #mythtv
[08:15:39] ikonia (ikonia!~irc@unaffiliated/ikonia) has left #mythtv ()
[08:21:10] stoffel (stoffel!~quassel@p57B4D3BA.dip.t-dialin.net) has joined #mythtv
[08:42:31] kormoc (kormoc!~kormoc@mythtv/developer/kormoc) has quit (Ping timeout: 260 seconds)
[08:42:48] andreax (andreax!~andreaz@p57B9363E.dip.t-dialin.net) has quit (Read error: Connection reset by peer)
[08:50:40] kormoc_afk (kormoc_afk!~kormoc@mythtv/developer/kormoc) has joined #mythtv
[08:50:43] kormoc_afk is now known as kormoc
[08:52:18] dudz_ (dudz_!~dudz@123-243-44-131.static.tpgi.com.au) has quit (Remote host closed the connection)
[08:59:26] mrand (mrand!~mrand@ubuntu/member/mrand) has quit (Ping timeout: 240 seconds)
[09:09:05] mrand (mrand!~mrand@ubuntu/member/mrand) has joined #mythtv
[09:53:44] stoffel (stoffel!~quassel@p57B4D3BA.dip.t-dialin.net) has quit (Ping timeout: 250 seconds)
[10:02:38] kth (kth!~kth@unaffiliated/kth) has joined #mythtv
[10:03:28] kth (kth!~kth@unaffiliated/kth) has quit (Client Quit)
[10:05:02] mike|2 (mike|2!~mike@c-24-21-63-118.hsd1.or.comcast.net) has quit (Remote host closed the connection)
[10:05:58] mike|2 (mike|2!~mike@c-24-21-63-118.hsd1.or.comcast.net) has joined #mythtv
[10:49:14] stuartm: well radial gradients in Qt suck, it only supports circles, not ellipses
[11:27:10] jya (jya!~jyavenard@morsang.avenard.com) has joined #mythtv
[11:27:10] jya (jya!~jyavenard@mythtv/developer/jya) has joined #mythtv
[11:27:10] jya (jya!~jyavenard@morsang.avenard.com) has quit (Changing host)
[12:03:21] davide_ (davide_!~david@mythtv/developer/gigem) has quit (Remote host closed the connection)
[12:03:47] davide_ (davide_!~david@host103.16.intrusion.com) has joined #mythtv
[12:03:47] davide_ (davide_!~david@host103.16.intrusion.com) has quit (Changing host)
[12:03:47] davide_ (davide_!~david@mythtv/developer/gigem) has joined #mythtv
[12:15:30] stoffel (stoffel!~quassel@p57B4D3BA.dip.t-dialin.net) has joined #mythtv
[12:17:59] davide_ (davide_!~david@mythtv/developer/gigem) has quit (*.net *.split)
[12:17:59] jmartens (jmartens!~jmartens@s5597ca60.adsl.wanadoo.nl) has quit (*.net *.split)
[12:19:14] davide_ (davide_!~david@mythtv/developer/gigem) has joined #mythtv
[12:19:14] jmartens (jmartens!~jmartens@s5597ca60.adsl.wanadoo.nl) has joined #mythtv
[13:51:56] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[14:13:22] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[14:14:59] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[14:22:23] danielk22: Beirdo: first, thanks for doing a first stab at that QProcess->myth_system porting! Unfortunately one of the things I had fixed in the mythtv-rec2 branch was the timeout, and it looks like it has been reintroduced, at 30 seconds. Is there any harm in setting that at five or six hours instead?
[14:22:50] danielk22: Beirdo: That way it would effectively be infinite, but without actually needing code to support infinite timeouts.
[14:25:14] danielk22: Beirdo: oh, the backend not starting up problem is with a master backend when run from an init.d script (it happens about 50% of the time on my production box.)
[14:37:42] sphery: danielk22: does that happen only when you're running multiple mythtv systems (i.e. multiple separate master backends)?
[14:40:29] danielk22: sphery: Fair question, but there were no other master backends running.
[14:41:53] sphery: ok, just wondered--I know that when you have multiple, the UPnP search will find them and then tries to ask which to use in several cases. Anyway, I plan to rework the config.xml/mysql.txt/upnp search stuff on Monday, and that may fix it for you.
[14:42:48] danielk22: cool, is there a ticket #?
[14:43:06] stuartm: it would be nice to finally get rid of mysql.txt, I can't really remember why I never finished this when I started it a year ago
[14:43:43] danielk22: stuartm: I think we all decided to wait until the day after the release, and then promptly forgot.
[14:44:05] stuartm: danielk22: yeah, it probably was something like that
[14:44:46] stuartm: the patch is probably still around but might not apply any more
[14:50:33] andreax (andreax!~andreaz@p57B9363E.dip.t-dialin.net) has joined #mythtv
[14:51:14] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[15:11:16] sphery: danielk22: heh, I'm going to ref http://code.mythtv.org/trac/ticket/7799 for it
[15:12:25] sphery: stuartm: yeah, the plan is to finally make config.xml the preferred config file--check for it first, and only if it's not there fall back to mysql.txt--and to actually use the info in whichever config file we use (including backend/db info) and only do a UPnP search if we still can't find the backend or database
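
In outline, the lookup order sphery describes would look something like the sketch below; the file names are the real ones under discussion, but the helper functions are invented placeholders:

    #include <QFile>
    #include <QString>

    // Hypothetical helpers, assumed to return true only when the file (or
    // search) yields usable backend/database settings.
    bool loadFromConfigXml(const QString &path);
    bool loadFromMysqlTxt(const QString &path);
    bool searchViaUpnp();

    bool loadDatabaseSettings(const QString &confDir)
    {
        // 1. Prefer config.xml when it exists and actually contains the info.
        if (QFile::exists(confDir + "/config.xml") &&
            loadFromConfigXml(confDir + "/config.xml"))
            return true;

        // 2. Otherwise fall back to the legacy mysql.txt.
        if (QFile::exists(confDir + "/mysql.txt") &&
            loadFromMysqlTxt(confDir + "/mysql.txt"))
            return true;

        // 3. Only if neither file locates the backend/database do we resort
        //    to a UPnP search.
        return searchViaUpnp();
    }
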
[15:13:02] stuartm: any interest in my original patch if I can find it?
[15:13:03] andreax (andreax!~andreaz@p57B9363E.dip.t-dialin.net) has quit (Quit: Leaving.)
[15:13:08] danielk22: sphery: I think it's a good time to drop mysql.txt...
[15:13:16] sphery: right now, even if you have a mysql.txt/config.xml, we often do upnp searches, anyway
[15:13:32] sphery: I'm all for dropping mysql.txt :) I'd love to see the patch if it's easy to find, stuartm .
[15:13:44] sphery: would definitely give me a head start
[15:14:18] sphery: I have a general plan, but it's quite likely that your patch will help me find things I'm forgetting/haven't thought through
[15:15:35] stuartm: I've got three, all from September, let me just check which is which
[15:16:12] sphery: heh, cool. If you would prefer to push them, feel free. I won't be able to work on it until Monday (after parents' visit)
[15:16:35] sphery: If not, a link to the patch is fine--and I can update them and do some testing
[15:16:49] stuartm: sphery: I'm not sure whether they are complete
[15:17:14] sphery: ok, so a link would be fine
[15:18:10] stuartm: give me a few minutes, I'm trying to beat cups into submission atm
[15:18:36] sphery: heh, maybe I should check back in 8hrs, then :)
[15:22:21] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[15:24:04] stuartm: heh, I've had to email a file to myself just to get it printed
[15:26:08] stuartm: when cups works it works well, when it doesn't it's almost impossible to work out why
[15:26:51] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[15:31:44] stuartm: ok, the three patches seem to be same thing at different points in time so logically the most recent should have been the last
[15:32:32] danielk22: wagnerrp: "/usr/local/bin/mythcommflag -j 9628 --noprogress --verbose general,record,channel --loglevel info --quiet" this is what shows up as the commflag command when i'm getting those errors on inserting the keyframe map entries.
[15:36:52] stuartm: 6 hunks failed because of VERBOSE >> LOG changes :(
[15:39:00] sphery: stuartm: I'm happy to update it to current if you like
[15:47:42] stuartm: sphery: I've already started, some changes in Jan stomped over a few bits so I'm just figuring that out now
[16:14:21] stuartm: sphery: http://pastebin.com/FNybAqrv – Manually fixed conflicts but some of the changes may be out of step with other code, the updated patch is untested and this version doesn't drop mysql.txt it just prefers config.xml
[16:16:16] stuartm: it's definitely possible to go further than I did in that patch
[16:20:34] Dave123 (Dave123!~dave@cpe-74-74-200-106.rochester.res.rr.com) has quit (Quit: Leaving)
[16:24:03] Dave123 (Dave123!~dave@cpe-74-74-200-106.rochester.res.rr.com) has joined #mythtv
[16:34:06] andreax (andreax!~andreaz@p57B9363E.dip.t-dialin.net) has joined #mythtv
[16:40:08] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[17:04:49] davide_ (davide_!~david@mythtv/developer/gigem) has quit (Remote host closed the connection)
[17:05:17] davide_ (davide_!~david@host103.16.intrusion.com) has joined #mythtv
[17:05:17] davide_ (davide_!~david@host103.16.intrusion.com) has quit (Changing host)
[17:05:18] davide_ (davide_!~david@mythtv/developer/gigem) has joined #mythtv
[17:24:58] jya (jya!~jyavenard@2a01:e35:2423:dd10:60c:ceff:fed2:908e) has joined #mythtv
[17:24:58] jya (jya!~jyavenard@2a01:e35:2423:dd10:60c:ceff:fed2:908e) has quit (Changing host)
[17:24:58] jya (jya!~jyavenard@mythtv/developer/jya) has joined #mythtv
[17:42:33] dudz_ (dudz_!~dudz@123-243-44-131.static.tpgi.com.au) has joined #mythtv
[17:59:19] RDV_Linux (RDV_Linux!~doug@CPE1caff7df6774-CM00252eac6f40.cpe.net.cable.rogers.com) has joined #mythtv
[18:00:02] RDV_Linux (RDV_Linux!~doug@CPE1caff7df6774-CM00252eac6f40.cpe.net.cable.rogers.com) has left #mythtv ("Ex-Chat")
[18:01:10] danielk22: Captain_Murdoch: I fixed the REC_PENDING breakage I had introduced. I also noticed there was a memory leak in the REC_PENDING handling so I added a little cleanup routine, but there may be a more elegant way to handle it.
[18:02:15] stoffel (stoffel!~quassel@p57B4D3BA.dip.t-dialin.net) has quit (Ping timeout: 258 seconds)
[18:27:48] mrand (mrand!~mrand@ubuntu/member/mrand) has quit (Quit: Leaving.)
[18:28:20] Captain_Murdoch: danielk22, thx. I was going to say that I'd be fine with it being converted to a single alert X seconds before recording, but if you've fixed it already...
[18:40:37] stoffel (stoffel!~quassel@p57B4D3BA.dip.t-dialin.net) has joined #mythtv
[18:50:13] stoffel (stoffel!~quassel@p57B4D3BA.dip.t-dialin.net) has quit (Remote host closed the connection)
[18:56:45] J-e-f-f-A (J-e-f-f-A!~J-e-f-f-A@unaffiliated/j-e-f-f-a) has quit (Ping timeout: 255 seconds)
[18:59:27] RDV_Linux (RDV_Linux!~doug@CPE1caff7df6774-CM00252eac6f40.cpe.net.cable.rogers.com) has joined #mythtv
[19:05:37] RDV_Linux (RDV_Linux!~doug@CPE1caff7df6774-CM00252eac6f40.cpe.net.cable.rogers.com) has left #mythtv ("Ex-Chat")
[19:21:19] stuartm: iamlindoro: ah, ignore my comment on github just now, I hadn't caught up to the commit where you reverted the change :)
[19:22:05] Beirdo: danielk22: it could well be set to a long timeout, but why would we need a 5–6h timeout on a channel change?
[19:22:50] Beirdo: that seems unreasonably long
[19:24:45] Beirdo: I'd think that if you can't change the channel in a period of about 30s (maybe a touch longer) then your recording source is of absolutely no use to mythtv as you'll never be able to record something timely on a schedule
[19:26:17] danielk22: Beirdo: Sure you can, you just set a 5 min start early time if you use something like DishNet which takes a long time to power on the STB and get it tuned.
[19:26:19] pheld (pheld!~heldal@cl-5.osl-01.no.sixxs.net) has joined #mythtv
[19:26:52] stuartm: double whatever figure you first think reasonable to allow for unusual delays caused by a perfect storm of events such as high load and devices which were left in standby
[19:27:20] Beirdo: hmmm, well a tuning time of a few minutes would be fine for cases like that, I suppose :)
[19:27:44] danielk22: Beirdo: The reason for it being longer than say 20 minutes is that you might want to still record the Olympic tryouts even if it takes a long time to get a signal.
[19:29:01] Beirdo: well, really, if you're using a satellite receiver, you should never be powering it off if you want to schedule recordings from it, but I see your point
[19:29:11] Beirdo: how about we set it to 30min for now?
[19:29:24] danielk22: While there might be an argument for even larger blocks for something like the olympics, I think 5–6 hours is twice a daily recording that I might care about even if I don't get the whole thing (C-SPAN).
[19:29:33] danielk22: Beirdo: What is the downside to a longer timeout?
[19:30:07] Beirdo: Hmmm, I dunno
[19:30:30] Beirdo: I still don't see the point in normal use
[19:30:44] Beirdo: yeah, there might be a few off-the-wall uses for long timeouts
[19:31:42] Beirdo: we can put it to hours, but really, that will be of an advantage to a very small handful of people. Most of us will want it to fail and fail fast so the recording can be rescheduled
[19:31:52] Beirdo: but sure
[19:32:18] Beirdo: I don't see any harm in it. Don't see much advantage either, but yeah, if you want a super-long timeout, feel free :)
[19:33:47] davide_ (davide_!~david@mythtv/developer/gigem) has quit (Remote host closed the connection)
[19:34:15] davide_ (davide_!~david@host103.16.intrusion.com) has joined #mythtv
[19:34:15] davide_ (davide_!~david@host103.16.intrusion.com) has quit (Changing host)
[19:34:15] davide_ (davide_!~david@mythtv/developer/gigem) has joined #mythtv
[19:34:15] danielk22: Beirdo: Code to handle timing out the recording needs to take pre-roll into account which this can't. Plus, I thought I had already committed code for that, maybe I forgot..
[19:35:22] brfransen (brfransen!~brfransen@216.254.250.47) has quit (Ping timeout: 246 seconds)
[19:35:44] Beirdo: I *think* (doublechecking right now), you can just take the 30 out
[19:35:58] Beirdo: and it would become infinite (i.e. never timeout)
[19:36:04] kwmonroe (kwmonroe!~kwmonroe@32.97.110.58) has quit (Ping timeout: 246 seconds)
[19:36:41] Beirdo: yup. As long as the wait calls are set to (0) as well
[19:37:13] Beirdo: I'll test it with no timeout
[19:37:18] danielk22: Do the wait calls actually block or just report on the current status?
[19:37:38] Beirdo: reports the current status unless you tell it to wait for a certain amount of time
[19:37:59] Beirdo: then it will block for up to that long and report status
[19:38:15] danielk22: got it. thx.
[19:39:13] Beirdo: ooh, wait.
[19:39:23] Beirdo: no, crap
[19:39:39] Beirdo: if timeout = 0, it will block unless it's finished.
[19:39:41] Beirdo: hmmm.
[19:39:50] kwmonroe (kwmonroe!~kwmonroe@32.97.110.58) has joined #mythtv
[19:39:58] Beirdo: I think we need another case for timeout -1 where it never blocks
[19:40:28] Beirdo: although that can be handled by adding the flag to run it in the background, which would be the simplest
[19:41:04] Beirdo: so add | kMSRunBackground to the flags, and it will act that way
[19:41:31] Beirdo: which sounds like what we'd want here for that behavior
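
As a hedged usage sketch only: the kMSRunBackground flag and the non-blocking Wait() behaviour come from the discussion above, but the myth_system() signature and header path shown here are assumptions, not a confirmed description of the reworked API:

    #include <QString>
    #include "mythsystem.h"   // MythTV header; exact path and signature assumed

    void changeExternalChannel(const QString &changerCmd, const QString &channum)
    {
        QString command = changerCmd + " " + channum;

        // With kMSRunBackground set, the call returns immediately and later
        // Wait()/status calls just report progress instead of blocking; a
        // timeout of 0 means the external command is never killed for
        // running too long.
        myth_system(command, kMSRunBackground, 0);
    }
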
[19:43:49] Beirdo: just compiling for a test run here :)
[19:45:12] danielk22: xris: FYI I'm not seeing the timeline view in trac...
[19:46:39] Beirdo: danielk22: seems to be functional, would you like me to commit it?
[19:46:54] Beirdo: I have it set to run in the background, no timeout
[19:47:22] danielk22: Beirdo: sure
[19:49:02] Beirdo: done
[19:49:08] danielk22: Am I understanding correctly that GetScriptStatus() is currently blocking until the script exits, but you are fixing it so it will just check the script status, in addition to the timeout thing?
[19:49:22] stuartm: danielk22: the timeline view in trac was disabled because we're using a version of trac which doesn't work well with git, it rebuilds the timeline from the git repo every time it's viewed and that brings the server to its knees
[19:49:24] Beirdo: correct
[19:49:40] Beirdo: by running it in the background, calls to Wait() will not block
[19:49:47] Beirdo: they return current status, and that is all
[19:50:08] danielk22: stuartm: even on the new server? + can we show a timeline view without commits then?
[19:50:29] danielk22: Beirdo: cool, that's exactly what we want there.
[19:50:57] Beirdo: yeah, I realize now that I shoulda had it in the background for sure
[19:51:03] Beirdo: timeout or not :)
[19:51:07] stuartm: new server is underpowered compared to our own server, or has xris already moved us back to our hardware?
[19:51:28] wagnerrp: the new server is actually more powerful
[19:51:33] wagnerrp: but it has a quarter the memory
[19:51:46] wagnerrp: so too much load and you quickly run into swap
[19:52:12] Beirdo: OK, so I have my new HDHR3... and the KVM
[19:52:20] stuartm: right but memory is more important for most of the stuff we're running
[19:52:24] Beirdo: I should move some computers into the office today
[19:52:36] danielk22: stuartm: I don't know. I ref'd xris earlier cuz I thought disabling the feature might have been a load shedding thing. But it is an insanely useful feature.
[19:53:06] Beirdo: stuartm: yah, we need to reformat the old server. Which is why xris was asking if we all have our stuff off the machine
[19:53:19] stuartm: danielk22: I'm not sure we'd be able to enable it whatever hardware we were running, but someone might be able to hack the code to allow a timeline view without the commits
[19:53:47] stuartm: or we could look again at the latest trac and see if they've fixed the problem
[19:53:56] Beirdo: alternatively we could spend some time on getting it to cache the git results so it's not parsing the whole bloody thing every time
[19:54:22] Beirdo: need python-fu for that
[19:54:24] danielk22: even without the commits it would be useful. but i wonder if we couldn't just have it generate the thing from a local git clone.. it doesn't need to be up to the second..
[19:54:43] stuartm: Beirdo: right, I wasn't sure if he'd gone ahead with the format and loaded the image already
[19:54:56] Beirdo: it is using a local clone. but it parses git log all the way back, IIRC
[19:55:01] Beirdo: ahh
[19:55:25] Beirdo: we could hack the code to always disable commits from the list, I guess
[19:55:37] Beirdo: still have the tickets in there... until we fix the core issue
[19:56:06] stuartm: i.e. on Monday he was "about to reimage" and it's now Saturday
[19:56:07] xris: new server is way more powerful than our own.... slightly less RAM, but much better cpu at rackspace.
[19:56:32] xris: stuartm: I sent the "please reformat" email to OSU yesterday
[19:56:42] wagnerrp: slightly less...
[19:56:43] xris: wanted to give people plenty of time to respond
[19:56:46] stuartm: xris: 'slightly'? That's understatement for comical effect right?
[19:56:48] wagnerrp: it has 4GB versus the old 16GB
[19:56:52] xris: 8G vs 12G?
[19:57:00] stuartm: 4GB
[19:57:00] Beirdo: I'm hoping to talk my bosses into letting me get some old equipment from work to give to mythtv
[19:57:05] stuartm: vs 16GB
[19:57:14] xris: ok, bad memory on my part, then...
[19:57:22] Beirdo: it might be a while though
[19:57:54] Beirdo: I know we have some older boxes already that are piled as "junk"
[19:58:23] Beirdo: and we are moving everything to big iron with virtualization, so there should be hardware eventually
[19:58:50] stuartm: there's the question of how much space etc. OSU would be willing to give us
[19:58:54] wagnerrp: big iron... a z9?
[19:59:04] Beirdo: and I know Captain_Murdoch is working on a similar score
[19:59:10] xris: wagnerrp: ec2, probably
[19:59:27] Beirdo: wagnerrp: as a generic term... racks and racks of IBM or HP or Dell gear
[19:59:33] Beirdo: they haven't decided yet
[19:59:47] Beirdo: KVM for Linux, VMWare for Windows
[19:59:52] superm1 (superm1!~superm1@ubuntu/member/superm1) has quit (Ping timeout: 246 seconds)
[20:00:11] stuartm: right now we're taking up 2U (iirc) and a single port? on a switch, more hardware would need negotiation I'd imagine
[20:00:14] Beirdo: the guy architecting it is ex-Amazon (and gets on my nerves daily)
[20:00:43] Beirdo: stuartm: that is a good point. We'd need to know how much space/power/etc is available
[20:00:45] wagnerrp: when i think big iron, i think a giant shared memory system, with hardware level redundancy built into the architecture
[20:01:10] Beirdo: wagnerrp: not that big, sorry... shoulda said "bigger iron" :)
[20:01:56] wagnerrp: not-so-big-iron would be software level redundancy, a la 'cloud'
[20:03:03] richardjprice (richardjprice!~richard@188-220-144-250.zone11.bethere.co.uk) has joined #mythtv
[20:03:14] Beirdo: I think they were talking machines with 20 socketed CPUs and mucho RAM
[20:03:35] superm1 (superm1!~superm1@204.8.45.13) has joined #mythtv
[20:03:35] superm1 (superm1!~superm1@ubuntu/member/superm1) has joined #mythtv
[20:03:35] superm1 (superm1!~superm1@204.8.45.13) has quit (Changing host)
[20:03:43] Beirdo: I try to ignore that guy as much as I can
[20:04:27] richardjprice: hello, which distro would you run mythtv on, if you were planning on setting up mythtv on a new htpc, in the next 10 minutes
[20:04:55] wagnerrp: richardjprice: not sure, but i would probably ask over in #mythtv-users
[20:05:08] richardjprice: oops, wrong room? sorry
[20:05:35] richardjprice (richardjprice!~richard@188-220-144-250.zone11.bethere.co.uk) has left #mythtv ()
[20:07:27] xris: Beirdo: osu has room. Captain_Murdoch's servers were 2 x 4u, I think, and they had room for those.
[20:07:47] xris: if mchx has virt-capable 64bit gear, I'd much rather have that stuff.
[20:08:36] Beirdo: yeah, I think the stuff in the pile now are the ex-asterisk boxes... 4-way opteron SiMech boxes IIRC
[20:08:39] Beirdo: 1U
[20:08:57] cesman (cesman!~cecil@pdpc/supporter/professional/cesman) has quit (Remote host closed the connection)
[20:09:40] Beirdo: dunno if they plan on putting them in the official bonepile, or if they will recycle straight from the data center, or what
[20:09:48] Beirdo: which is why I need to ask the boss-man
[20:10:06] Beirdo: if they are on the bonepile, they are free to take
[20:11:15] Beirdo: anyways, I think I'll go start hooking up stuff in my office. Be back in a bit to look at code
[20:19:46] xris: well, Tim's a pseudo mythtv fan, so he might help. marc, too.
[20:20:06] xris: (not that marc has help-pester-the-boss influence)
[20:34:23] Beirdo: heh, yeah
[20:34:51] Beirdo: OK, 4-way USB KVM is in place. Now I need to move 3 more computers in
[20:44:08] xris: that should have been "has more than ..."
[20:49:14] Beirdo: yeah, I read it that way
[20:51:21] Beirdo: wow, that machine turns off fast. from shutdown command to power off of about 5s
[21:02:01] stuartm: iamlindoro: how is this new metadata stuff supposed to work? I've a recording here that it seems unable to discover the series/ep for, yet that information is already in the database from the xmltv source
[21:07:02] zombor (zombor!~zombor_@kohana/developer/zombor) has quit (Remote host closed the connection)
[21:07:18] Unhelpful (Unhelpful!~quassel@rockbox/developer/Unhelpful) has quit (Ping timeout: 276 seconds)
[21:07:32] Guest57722 (Guest57722!~andy@user-0cej14o.cable.mindspring.com) has joined #mythtv
[21:07:33] sphery: stuartm: thanks for the patch--got it downloaded and will look at it Monday
[21:10:12] stuartm: and for films it's not using the year already in the database?
[21:10:14] Guest57722: Trying to compile the mythtv backend only on nexenta; the first run of ./configure complains about qmake not found. I don't want to install anything the backend won't need and I haven't looked into the code yet. Anyone got any advice on compiling the backend only in general?
[21:10:20] danielk22: sphery: have you seen the mythtv-users' posts "MythFrontend for Windows – Problem with mismatched TimeZone" ?
[21:11:15] sphery: heh, not yet--but that's unusual since all the timezone stuff is #if'ed out for Windows
[21:13:32] sphery: I'm hoping they're on master and someone just accidentally enabled the check in unstable. I'll take care of it, though. Thanks for the heads up.
[21:13:57] sphery: Guest57722: /topic (you want #mythtv-users )
[21:14:28] Guest57722: ok. Thanks.
[21:14:42] Guest57722 (Guest57722!~andy@user-0cej14o.cable.mindspring.com) has quit (Quit: Ex-Chat)
[21:17:19] Unhelpful (Unhelpful!~quassel@rockbox/developer/Unhelpful) has joined #mythtv
[21:19:38] iamlindoro: stuartm: Regarding your comment in the source-- are you catching up on commits, perchance? I fixed that, it was a misunderstanding on my part
[21:20:11] iamlindoro: stuartm: Suggest reading the documentation on the metadata stuff: http://www.mythtv.org/wiki/Enhancing_Recordin . . . adata_Lookup
[21:20:53] wagnerrp: yeah, he retracted the comment in here when he read further into the commits
[21:21:07] boringuy (boringuy!~andy@user-0cej14o.cable.mindspring.com) has joined #mythtv
[21:21:41] iamlindoro: oh
[21:23:44] iamlindoro: stuartm: The short, short answer is: Once you have loaded inetrefs into all your recording rules (can be automatic with some possibility of error, or manual in the SchedEdit screens), all recordings going forward will inherit the inetref and perform lookups at the start of recordings to add season and episode (and in the future, "whatever else")
[21:24:13] iamlindoro: stuartm: But I don't understand "that information is already in the database from the xmltv source"
[21:24:20] iamlindoro: so you'll need to explain what you meant by that to me
[21:24:47] stuartm: iamlindoro: yeah I'm working through two weeks of commits ... slowly
[21:26:18] iamlindoro: additionally, even recordings *without* an inetref will still attempt a lookup, it'll just be a lot more accurate with one
[21:26:56] stuartm: iamlindoro: in my program/recorded tables I have season, episode and year info but the lookup seems to ignore that information? e.g. When searching for "Raiders of the Lost Ark" two results are returned and it can't tell which one to use even though the 'year' should make it clear
[21:27:17] iamlindoro: stuartm: Year is not a hint
[21:27:37] iamlindoro: title, subtitle, season, episode, and inetref are the only mechanisms for matching
[21:28:03] stuartm: similarly when using the manual search it's pre-populating the series/episode for most series, but some it just shows series 1 ep 1 even if the database includes the correct season/episode info
[21:28:23] iamlindoro: correct, 1x1 is the "dummy" indicator to the code that it's TV
[21:28:24] stuartm: Castle is a good example for that
[21:28:53] stuartm: iamlindoro: huh, then I wonder why it's showing something else for the majority of series
[21:29:13] iamlindoro: when it can't match title and subtitle, it will set it to 1x1 in the rule since the rule season and episode are irrelevant except for purposes of telling the code that recording based on it should be looked up with the TV grabber first
[21:29:30] iamlindoro: stuartm: I'm sorry, you're really not being clear to me and I'm having a hard time understanding you
[21:29:48] iamlindoro: "showing something else for the majority of the series" is unclear to me
[21:30:16] iamlindoro: if it can match the title + subtitle, or even better, inetref + subtitle, it will set the correct season and episode in the rule
[21:30:46] iamlindoro: if it can only match the *title*, but determines it's a TV series, it sets season and episode to 1x01 to allow the code to use the rule as a reference point for shows recorded based on that rule
[21:31:06] iamlindoro: thus, a lookup for a recording based on your "Castle" rule will know to use the TVDB grabber first
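
Paraphrasing the matching rules iamlindoro lays out above as a small hypothetical sketch (the types and field names are invented, not the actual metadata classes):

    #include <QtGlobal>

    struct LookupResult { bool matched; uint season; uint episode; };
    struct RuleStub     { uint season; uint episode; };

    void applyLookupToRule(RuleStub &rule,
                           const LookupResult &byInetrefSubtitle,
                           const LookupResult &byTitleSubtitle,
                           bool looksLikeTV)
    {
        if (byInetrefSubtitle.matched || byTitleSubtitle.matched)
        {
            // Exact match on inetref+subtitle (preferred) or title+subtitle:
            // record the real season and episode in the rule.
            const LookupResult &hit =
                byInetrefSubtitle.matched ? byInetrefSubtitle : byTitleSubtitle;
            rule.season  = hit.season;
            rule.episode = hit.episode;
        }
        else if (looksLikeTV)
        {
            // Title-only match that is known to be a series: 1x01 is a dummy
            // marker telling later lookups to try the TV grabber first.
            rule.season  = 1;
            rule.episode = 1;
        }
    }
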
[21:31:38] iamlindoro: regarding Raiders of the Lost Ark, you are likely using some theme which lacks the new base image and metadata multiresult windows
[21:32:00] stuartm: ok ... I guess I'm not really following why it ignores season/ep
[21:32:12] iamlindoro: What "it"? Ignores how?
[21:32:33] stuartm: iamlindoro: don't worry about it, I sense this conversation going nowhere good
[21:32:40] iamlindoro: Are you trying to set season and episode manually on the recording rule?
[21:33:08] iamlindoro: and wondering why it it doesn't use that when you hit "perform query?"
[21:33:43] iamlindoro: stuartm: I am happy to explain it all, I'd rather you not just write off the whole functionality because I'm being irritable (just got back from a 100 mile bike ride), I just need you to use your words ;)
[21:34:49] stuartm: no, they are already in the database, inserted by the xmltv grabber and what you seem to be saying is that the metadata lookup pays no attention to that, choosing to use the subtitle instead?
[21:35:27] iamlindoro: stuartm: So you're saying that your grabber is populating the season and episode columns? Or another column?
[21:35:55] iamlindoro: If the season and episode columns, I am a little skeptical, unless nick is insanely on the ball with the radio times grabber
[21:36:05] stuartm: I guess I just found the UI confusing since it was indicating it didn't know the season/ep by showing 1x1 even if the information is there
[21:36:16] iamlindoro: there in which column?
[21:37:45] stuartm: syndicatedepisodenumber: E7S1
[21:37:50] iamlindoro: if syndicatedepisodenumber, which I understand some xmltv grabbers populate, that data is totally bogus here
[21:37:53] iamlindoro: I can't use it
[21:37:54] kth (kth!~kth@unaffiliated/kth) has joined #mythtv
[21:38:13] iamlindoro: And trying to do so will just throw things way off
[21:38:24] kth (kth!~kth@unaffiliated/kth) has quit (Client Quit)
[21:38:43] iamlindoro: Since some of the information here *is* somewhat analogous to season and episode, others are a serialized order, some are just a random string of characters
[21:39:18] stuartm: iamlindoro: right, for xmltv we use a standard format, I wasn't aware that SD inserted whatever junk it wanted into that field
[21:39:54] stuartm: I guess I can fix the xmltv parser to put that into a series/episode column instead
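
Illustration only: a value shaped like the "E7S1" example above could be split with a QRegExp as below, though, as the following lines make clear, the field's format varies by grabber and this data should not feed the metadata lookup classes:

    #include <QRegExp>
    #include <QString>

    bool parseSyndicatedEpisode(const QString &value, int &season, int &episode)
    {
        QRegExp re("^E(\\d+)S(\\d+)$");   // matches strings such as "E7S1"
        if (re.indexIn(value) == -1)
            return false;                 // some other grabber-specific layout

        episode = re.cap(1).toInt();
        season  = re.cap(2).toInt();
        return true;
    }
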
[21:40:00] iamlindoro: :(
[21:40:14] iamlindoro: I mean, I guess if you must
[21:40:28] stuartm: why wouldn't we want to use it?
[21:40:42] iamlindoro: It just introduces a new pile of complexity that I have to worry about in the grabber classes, which are already insanely complicated
[21:40:53] iamlindoro: I won't stop you, I'm just uncomfortable with the idea
[21:41:03] stuartm: I'm lost ...
[21:41:24] iamlindoro: Your airing orders are not necessarily the same as ours
[21:41:31] iamlindoro: nor are they necessarily the same as those in France
[21:41:34] iamlindoro: or Hungary, etc.
[21:41:48] iamlindoro: Or maybe your broadcaster decides to use the DVD order
[21:41:51] stuartm: true ...
[21:42:02] iamlindoro: so now, when I go to try to pull in more data, I insert a load of garbage into someone's database
[21:42:29] iamlindoro: versus simply letting the grabber do its work, which allows for a more or less universal behavior
[21:42:52] iamlindoro: Since the lookup in the rule isn't tied to any particular airing order
[21:43:21] stuartm: although it's even more confusing therefore to show something completely different to the _actual_ air order in the UI
[21:44:15] stuartm: so it's fine to store something consistent in a hidden db field and show something else to the user – i.e. don't show the grabber determined season/ep in the UI ignoring info provided by the local guide source
[21:45:31] iamlindoro: Any user who doesn't like the functionality should simply turn it off... but those users have spent the last three years becoming accustomed to the results returned by the TVDB and TMDB
[21:45:35] stuartm: you're completely right when you say this is complicated
[21:46:25] iamlindoro: If the users don't like the results, they can turn it off. If they want to be provided with localized season and episode numbers, which is fine by me, they should lobby the metadata sources to reorder season and episode numbers when requested for that locale/language
[21:47:05] iamlindoro: But I am not comfortable with inserting data that is of questionable veracity into the DB, and having it potentially not work, or, even worse, screw up the data inserted further down the chain
[21:51:12] iamlindoro: Anyway, if any user, anywhere, is unhappy with TVDB not returning localized ordering, it's not something that a single person has ever mentioned. I was very, very careful to try to avoid the many downfalls of JAMU when it came to being automated to the point of inaccuracy... inserting data that wasn't provided by the source at which we look it up... well, I might as well not have bothered
[21:51:22] sphery: stuartm: FWIW, (though this isn't really important, anymore) "whatever junk" SD puts in the syndicatedepisodenumber field is the syndicated episode number--the official identifier chosen by the production company.
[21:51:42] sphery: so SD isn't just making stuff up.  :)
[21:51:43] stuartm: it might not be pretty but I'm starting to think we need additional season and episode fields, localseason/localepisode – I like the idea of showing season/episode information in the UI but I'd personally rather it was the _air_ order
[21:51:55] danielk22: iamlindoro: stuartm: I'm pretty sure the way data sources strive to do season and episode order is by the first worldwide airing.
[21:52:52] danielk22: The order that shows were intended to be aired or the order they are re-aired in syndication may be completely different.
[21:55:00] stuartm: the problem here is that the 'internal' episode numbering which we'd need to match the metadata sources doesn't necessarily reflect what the user is going to find useful – if my recordings jumped from episode 5 to 7 I'm going to first assume that the recorder screwed up and failed to catch #6
[21:55:38] pheld (pheld!~heldal@cl-5.osl-01.no.sixxs.net) has quit (Quit: Leaving.)
[21:56:01] iamlindoro: stuartm: I won't object if you want to parse or show the syndacatedblahblahblah, just so long as under no circumstances is it used in any fashion in the metadata classes
[21:56:15] stuartm: iamlindoro: fine
[21:57:00] iamlindoro: stuartm: surely you can see my point, right? The data which is used for lookups *needs* to have been provided by the source at which we are doing the lookups. It's the only way to maintain integrity in all this stuff
[21:57:18] stuartm: iamlindoro: I think I just said _exactly_ that
[21:57:43] iamlindoro: I hadn't read it that way, but sure, glad you agree
[22:01:54] stuartm: going back to what was said earlier, just so I'm clear, there are no plans to use year in the search? If it's too difficult then that's fine, but at least for 9/10 of the cases I've found here it would help for films – especially remakes
[22:03:13] iamlindoro: I think that is a good idea... though I can't just iterate through the result list and take the first, since sometimes you're talking about a movie and its documentary, or a movie and the "Special edition," or whatever... so I'll have to think about how best to do it
[22:03:18] iamlindoro: But it is a good idea
[22:09:54] kth (kth!~kth@unaffiliated/kth) has joined #mythtv
[22:10:33] kth (kth!~kth@unaffiliated/kth) has quit (Client Quit)
[22:10:45] stuartm: I'd imagined it mostly being useful where you've got two or more matches with exact titles but different years; since the UK xmltv source provides the year of release, it acts as a clear tie-breaker – unless two of those results use that same year
[22:11:08] iamlindoro: stuartm: yeah, definitely, especially given the last ten years of remakes (Clash of the Titans, etc.)
[22:11:19] iamlindoro: I'll look at it tonight, I have the day to work once I get some sleep
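
A sketch of the tie-breaker being agreed on here: the guide-supplied year is only trusted when it singles out exactly one of several same-title results. The types are hypothetical:

    #include <QList>
    #include <QString>

    struct MovieResult { QString title; int year; };

    // Returns the index of the one result matching the guide year, or -1 when
    // the year does not disambiguate (no match, or two results share the year).
    int pickByYear(const QList<MovieResult> &results, int guideYear)
    {
        int found = -1;
        for (int i = 0; i < results.size(); ++i)
        {
            if (results[i].year == guideYear)
            {
                if (found != -1)
                    return -1;   // second hit with the same year: still ambiguous
                found = i;
            }
        }
        return found;
    }
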
[22:18:21] jya (jya!~jyavenard@mythtv/developer/jya) has quit (Ping timeout: 255 seconds)
[22:20:51] zombor (zombor!~zombor_@kohana/developer/zombor) has joined #mythtv
[22:34:27] boringuy (boringuy!~andy@user-0cej14o.cable.mindspring.com) has quit (Remote host closed the connection)
[23:10:00] J-e-f-f-A (J-e-f-f-A!~J-e-f-f-A@unaffiliated/j-e-f-f-a) has joined #mythtv

IRC Logs collected by BeirdoBot.