Hardware Support: Discussions related to using various hardware setups with SageTV products. Anything relating to capture cards, remotes, infrared receivers/transmitters, system compatibility or other hardware related problems or suggestions should be posted here.
#1
Potentially 15TB Sage Server Setup Advice
I have been running Sage here in the house for about 4 months. The first two months were on experimental hardware in various configurations, and the last two have used the more final hardware with the HD Extenders. It is now time to clean up the server closet and close out this project with a long-term configuration. I have a few ideas and issues on which I would like the thoughts of the more experienced users out there.
My SageTV server is currently Sage 6.3.5 on Windows Server 2003 SP2: dual Xeon 3.0 HT processors, 2GB RAM, an nVidia QuadroFX 1400, 2x500GB (RAID1, OS), 4x500GB (RAID5, recordings), GigE, an HDHomeRun, an HVR-1600, and a PVR-500, supporting 3 HD Extenders. It took several rebuilds of the server to work out all the issues, but video processing is now smooth. I run DirMon2 and ShowAnalyzer (paid mode) limited to two live sessions at a time. A second workstation (dual Xeon 3.2, 4GB), in addition to its regular duties, runs another two sessions of ShowAnalyzer to catch any overflow from the main server. All machines use Gigabit network cards connected to a single 24-port Gigabit switch, so the networked ShowAnalyzer sessions run almost as well as local ones, with very little impact on the server. I have considered changing the timing in the DirMon2 configs so that the network sessions are the first choice and the local sessions are secondary, but I haven't enabled that yet.

The digital tuners are all connected to QAM sources, but the HVR-1600 shows much lower signal strength and quality than the HDHomeRun does, so I have that tuner disabled; I will probably move it to OTA recording at some point. Right now my signal comes from my local TWC for everything, but I still have DirecTV connected to the TiVo this system is replacing. My DirecTV account is currently suspended pending some testing I want to do with the new USB HD tuner. Given the number of HD channels DirecTV has compared to TWC, that choice is still up in the air.

My main issues at this point relate to storage. The big RAID card (an OEM version of the Adaptec 2810SA with only 6 ports) has capacity for 6 drives in SATA I mode (1.5Gbps). It is not a stellar performer and is on the replacement list; it does not support NCQ even though all of my drives do.
I originally connected four drives to that card, configured the array (RAID5), and partitioned off 40GB for the OS, leaving the remaining 1.3TB for recordings. Unfortunately, I had major issues with skips and digital video artifacts (blocky recordings) that careful performance tracking isolated to the disk going to 100% utilization for long periods. For some reason, having the OS share the volume with the recording disk caused all sorts of contention. I originally thought this was related to swap file usage (since the server has only 2GB of RAM), but tracking shows the server almost never pages out data, and swap utilization is never over 5%. Therefore, I added two additional disks on the motherboard RAID controller (Adaptec HostRAID), configured them as a mirror pair, and reinstalled the system with the OS on the mirror (along with my audio and pictures), dedicating the RAID5 set to recordings. The highest load I have put on the server in this configuration is recording 2 HD and 2 SD streams, running 4 ShowAnalyzers (two local, two remote), and playing back 2 HD and 1 SD streams. I was able to do this with disk, processor, and memory throughput to spare.

The server also has two external USB drives that I use mostly for DVD storage. I do not like that there is no redundancy for these drives. I still have all the DVDs on the original discs and could re-rip, but I would rather not have to, and external disks like this are messy for connections and such.

Now to the problem: the chassis I have for the server has capacity for 3 5.25" drives and 3 3.5" drives. At the moment it holds two optical drives (1 DVD and 1 DL DVD-RW) and the 4 disks of the main array internally mounted: three in the 3.5" bays and one in a 5.25" bay using an adapter. Unfortunately, this leaves no room for the OS mirror set, so those drives are sitting on top of the case, with the case slightly open to snake the power and SATA cables outside.
For obvious reasons, this cannot stay this way, and I would like to put in a solution with room to grow. Sitting on my spare parts pile is a Coolermaster Stacker full tower case with 11 5.25" drive bays. What I am considering is moving the drives into hot-plug backplanes in the tower case and then using eSATA port multiplier technology to connect them back to the main server. Done this way, the spare case has capacity for 18 drives that I could grow into. Here are some links to the components in the solution: Backplanes Port Multipliers Controller Card Technology Overview. Here is a Photoshop mockup of the finished solution:

The fully populated hard drive tower would draw only 250W under full-load random seeks. It would nearly double that during startup, but 500W power supplies are mundane and I have one in the parts pile. Fully populated, the total solution would be very clean from a connection standpoint, having only two power connectors and 4 eSATA cables between the boxes, plus the regular connections to the host for tuners, networking, and such.

The initial cost of putting in the system would be around $620 on top of what I already have in the solution (about $2,300 including 3 HD Extenders). This would add 3 more 500GB drives for a total usable capacity around 3TB: a triple-drive backplane in the main chassis for 6 internal drives, plus the first triple backplane in the external drive tower for 3 external drives. I could then add up to three additional blocks (5-disk backplanes) of storage. At today's prices, a 2TB block would cost $675, a 3TB block $940, and a 4TB block $1,600.

The major problems I think I will run into are:
1. Where to mount the port multipliers, since the Stacker is a standard ATX case that will be running without a motherboard?
2. Given that the chassis will not have a motherboard, how do I get the signal from the power switch on the chassis to the power supply to turn it on?

According to the manufacturer, I will be able to get the full 3Gbps throughput on each disk in the system. This is clearly burst speed per disk, since totaling that speed across all the disks simultaneously (54Gbps) is faster than the PCI-X bus (typically 8 or 16Gbps) or a 4X PCI-Express bus (about 8Gbps). Some of the performance numbers they posted are very impressive, but benchmarks can be very deceiving. The truly amazing part of all this is that I am getting all of this storage at the cost of a single PCI-X slot in the host machine. In theory, I could replicate this configuration two more times on the open slots I have left in the server (potentially 49TB of storage with all 1TB disks, after redundancy overhead).

My main questions for the group are:
1. Has anyone built anything remotely along the lines of this solution, or parts of it (most notably the port multipliers), and if so, what were your experiences?
2. Does anyone have recommendations to modify the design as suggested?
3. Are there any simpler, cheaper, and/or cleaner solutions that give this kind of capacity now and can scale to 5x that capacity with little or no effort in the future?
4. Does anyone have comments and/or recommendations about the solution I have implemented so far?

I appreciate your thoughts and suggestions in advance,

Chris
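The bus-saturation arithmetic in the post (18 disks at a nominal 3Gbps each versus an 8Gbps-class host bus) can be sanity-checked with a quick sketch. The figures are the nominal ones quoted above, not measured numbers, and the helper names are mine:

```python
# Nominal link and bus speeds from the post (Gbit/s); real-world
# sustained throughput is far lower, so treat this as a ceiling check.
SATA_II_LINK_GBPS = 3.0   # per-disk burst speed
PCI_X_BUS_GBPS = 8.0      # low end of the 8-16 Gbit/s range cited
NUM_DISKS = 18

def aggregate_burst_gbps(disks: int, link_gbps: float = SATA_II_LINK_GBPS) -> float:
    """Sum of per-disk link speeds if every disk bursts at once."""
    return disks * link_gbps

def bus_limited(disks: int, bus_gbps: float = PCI_X_BUS_GBPS) -> bool:
    """True if simultaneous bursts would exceed the host bus."""
    return aggregate_burst_gbps(disks) > bus_gbps

# 18 disks * 3 Gbit/s = 54 Gbit/s, far beyond the ~8 Gbit/s bus,
# so the per-disk figure can only ever be burst speed, as noted.
```

This makes the point in the post concrete: the 3Gbps-per-disk claim is fine for any single transfer, but the bus becomes the bottleneck as soon as a handful of disks stream at once.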
__________________
Server: Sage v6.6.x, 2003 SP2, 2xXeon 3.6, 2GB, ATI Radeon 7000, 2x750GB (RAID1, OS), 6x500 (RAID5, Recordings), GigE, HDHR, HD PVR, PVR-500 Client: STX-HD100, Panasonic 50-LC13(HDMI>DVI), Denon 3803(Optical) Client: STX-HD100, Vizio VW37L(HDMI) Client: STX-HD100, Sharp 27" SDTV(S-Video/RCA) Client: Sage v6.6.x, Win7 Ultimate x64, Dell Latitude E6500, Core 2 Duo 2.5, 4GB, Quadro 160M, GigE/11N Client: Sage v6.6.x, XP Tablet, Acer Travelmate C310, Centrino 1.5, 512MB, Integrated, GigE/11G
#2
All I can say is WOW! I have been looking into buying an external enclosure to do exactly what you are trying to do. I never thought of a home-brew solution. You might find that buying a pre-made solution isn't much more expensive, though. There are a number of products on Newegg that are exactly like what you are trying to build.
For the one question you had about turning the system on: I believe there are pins on the power supply that control turning it on and off. I do not know whether you can tie those pins into the existing power switch on the case or not. I would look at guides for water-cooling a system; those builders used to use more than one power supply to run all the pumps, and they might have diagrams on how to turn a power supply on and off. The only thing I would be worried about is whether it is good to run a power supply without the load of the motherboard.

You also need to take heat into consideration. All of those drives will put off a lot of heat, and the case is not designed to funnel that kind of heat out. I would like to see this finished. If it comes out well, I would be interested in following your design. It would offer a nice expandable storage array at a decent cost!
#3
That case itself has tons of cooling options and could easily move enough air out. Each of the backplanes has a dedicated cooling fan on the back and the case itself has room for 3 or 4 80mm fans and 2 120mm fans. That should produce a whirlwind of air moving through the case. Additionally, the closet in the house where I have it also has an exhaust boost fan that moves about 100cfm of air out of the closet back into the return side of the HVAC.
When I looked at prebuilt systems, anything I could find was in the $3,000+ range once you get to a capacity of 16 drives or more. This solution completed without drives (or the case and power supply, which are about $225 worth) is about $900. That includes all 4 of the hot-swap backplanes with all the drive caddies, 4 port multiplier cards to support each backplane, and the host controller in the server. The total cost including 500GB drives would be around $2,500, which is cheaper than most of the pre-built racks I saw without drives. That would be 9TB of total storage, or 7TB usable after RAID redundancy.

Can you post some links to the items you are talking about on Newegg? Most of the external units I have seen are in the $125-per-drive range for dumb attached storage (USB RAID, etc.) and $200-$250 per drive for smart storage (NAS RAID, HPM, etc.). Thanks, Chris
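The cost comparison above boils down to dollars per usable terabyte, which is easy to compute. A small sketch using the figures from the post ($2,500 built out, 18 x 500GB drives, 9TB raw shrinking to 7TB after redundancy); the function name is mine:

```python
def cost_per_usable_tb(total_cost: float, drives: int, drive_tb: float,
                       parity_drives: int) -> float:
    """Cost divided by the capacity left after redundancy overhead."""
    usable_tb = (drives - parity_drives) * drive_tb
    return total_cost / usable_tb

# The post's build: $2,500, 18 x 0.5TB drives, ~2TB lost to redundancy
# (9TB raw -> 7TB usable), i.e. roughly $357 per usable TB.
build = cost_per_usable_tb(2500, 18, 0.5, 4)
```

The same function makes it easy to compare against the prebuilt units quoted at $125-$250 per drive bay before any drives are even installed.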
#4
I just looked around at some of the items on Newegg and was surprised to discover that several vendors are selling exactly the parts I am talking about, in other formats... This one has a lot of potential at $50 per disk, and the standard rack-mount form factor has potential too:
http://www.newegg.com/Product/Produc...82E16816133007 The biggest negative that I see with this solution is that without the port multipliers, this rack would require one card in the host per bank of 4 drives in the chassis... Thoughts? Chris
#5
Chris, that is the exact part I was going to point you to. I saw some people using it over at AVS Forum.
#6
Have you considered keeping the server and RAID in one machine?
I have this case and love it, though it is very loud with the hot-swap fans running. http://www.serversdirect.com/product.asp?pf_id=CS4471 I am running 18 drives in there now: 16 in the hot-swap bays and two more just connected normally above them.
#7
I found a solution that could manage this with two cards and would still have room to manage an internal array in the server for booting... It is still more than the original solution ($1,200 vs. $900) and it only holds 12 drives instead of 18, but each RAID array would be hardware accelerated, which is a major bonus. The performance numbers I have seen on most of the Highpoint cards (including some personal experience) are very impressive.
Certainly some intriguing food for thought.... Chris
#8
How are you controlling the drives? I am guessing you are using one of the large 16-port RAID cards like the RocketRAID 2340 to control the drives in the hot-swap trays, and then the onboard RAID for the other two. Chris
#9
Close. I'm running an Areca ARC-1261ML card with the 16 drives connected to the backplanes in the following configuration:
8 x 500GB Seagate ST3500320AS in RAID 6: 2.72TB usable
2 x 1TB Hitachi HDS721010KLA330 in RAID 1: 931GB usable
2 x 500GB Western Digital WD5000YS in RAID 1: 465GB usable
2 x 500GB Western Digital WD5000YS in RAID 1: 465GB usable
2 x 500GB Western Digital WD5000YS in RAID 1: 465GB usable

Then for my boot drive I'm running two 150GB Raptors in RAID 1 on the onboard Nvidia controller.
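The usable figures quoted above follow from standard RAID arithmetic plus the decimal-vs-binary gigabyte gap (a "500GB" drive shows up as about 465GB because controllers and operating systems report binary units). A quick sketch; the helper names are mine, not from any controller tool:

```python
def usable_bytes(drives: int, drive_gb_decimal: float, parity: int) -> float:
    """Usable capacity in bytes: (drives - parity) * decimal drive size."""
    return (drives - parity) * drive_gb_decimal * 1e9

def as_tib(b: float) -> float:
    """Bytes to binary terabytes, as most OSes report 'TB'."""
    return b / 2**40

def as_gib(b: float) -> float:
    """Bytes to binary gigabytes, as most OSes report 'GB'."""
    return b / 2**30

# 8 x 500GB in RAID 6 (2 parity drives) -> ~2.72TB, matching the post
raid6 = as_tib(usable_bytes(8, 500, 2))
# 2 x 1TB mirror -> ~931GB; 2 x 500GB mirror -> ~465GB
mirror_1tb = as_gib(usable_bytes(2, 1000, 1))
mirror_500 = as_gib(usable_bytes(2, 500, 1))
```

Running the same arithmetic against any proposed drive mix is a quick way to check a vendor's "usable capacity" claims before buying.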
#10
Chris
#11
I needed the fast card. I sometimes record 5 HD streams at around 18Mbit/s each and 5 SD recordings at around 12Mbit/s each. I also don't trust the redundancy of RAID 5 and wanted the extra protection of RAID 6, which also needs more horsepower.

Eventually, when I need to, I'll take the 8 drives out and put in another RAID 6 of 1TB drives (or 2TB if they are out and affordable by then), and then repeat the process every time I need more space, removing the smaller RAID 6 each time.
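The worst-case recording load described above (5 HD plus 5 SD streams) is easy to total up. A sketch using the rough bitrates given in the post; the function names are mine:

```python
def total_mbps(streams):
    """Sum (count, Mbit/s) pairs into an aggregate bitrate."""
    return sum(count * rate for count, rate in streams)

def mbytes_per_sec(mbps: float) -> float:
    """Convert Mbit/s to MB/s of disk writes."""
    return mbps / 8.0

# 5 HD @ ~18 Mbit/s plus 5 SD @ ~12 Mbit/s = 150 Mbit/s, i.e. under
# 19 MB/s of sequential writes. Modest on paper, though RAID 6 parity
# updates and concurrent playback raise the real controller load.
load = total_mbps([(5, 18), (5, 12)])
```

Seen this way, the raw write rate is small; the reason a fast hardware card still matters is the parity math and the mixed read/write seek pattern, not the sequential bandwidth itself.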
#12
Ok, a few things...
First off, that controller is basically just a vanilla SATA controller. It uses the SiI 3124, which supports FIS-based switching for port multipliers (PMPs), which is very good for performance. The PMP model you cite is based on the SiI 3726, which supports FIS, so all is good news there. The bad news is that you would effectively be running software RAID under Windows Server 2003. I would very much recommend against this, as a number of friends of mine have run into lots of issues with that configuration. Those Highpoint cards aren't much better. If you really want to stay with 2003 Server, I would go with an Areca hardware-based controller that supports PMPs.

Linux would support this configuration very well, but that would mean running a separate server for the storage. I do this, but it doesn't sound like you want to separate the functions, so that doesn't seem like a good option for you. I understand from reports of others on avsforum that Windows Server 2008 has MUCH better software RAID support, with dramatic improvements in reliability, and it's also quite fast. If you want to stay with pretty much the config you outlined, that's probably the easiest thing to do, assuming you can get all the other software to run under Server 2008.

Also, the next generation of Intel southbridge, the ICH10 and ICH10R, will have full FIS-based support for PMPs on the motherboard SATA ports. These should begin shipping in April or so. Most of the upcoming P45 and G45 based motherboards will have ICH10 SATA ports and should give very good performance with PMPs in both Windows (with Server 2008) and Linux. This will be a pretty cheap way to go if you want to put storage in its own dedicated server.

As for the Stacker, I have that case, and you absolutely can run it with 4 5-in-3 SATA racks, with plenty of room left to stick the PMPs inside. Case ventilation matters less here than making sure the 5-in-3 racks have good forced air through them.
The Supermicro CSE-M35-1s work great, as do the Addonics 5-in-3s, which are also remarketed by several vendors like AMS and are available at Fry's. There will be no issue running a PSU in the case just for the disks, but you will need a motherboard "PSU tester" or a jumper in the ATX power supply cable to get the PSU to start without a motherboard. Doing it this way means only needing to run 4 SATA cables to the case. Personally, I would use 4 external eSATA ports, using internal-SATA-to-eSATA PCI-style brackets on the server and the same brackets on the Stacker case, so all you have are 4 eSATA cables between the server case and the storage case. This should be clean, and it will work as long as the two cases are not too far apart.

Note, however, that you will be subject to the PCI-X bus limits, which shouldn't be too bad; but a server with 6 motherboard-based SATA II ports with full FIS support, like the ICH10R's, would I think go faster. Of course, with multiple PCI-X cards you could even out the load as well. Hope this is helpful.
__________________
Server: Sage 6.5.9 - X2 3800+, DFI NF4 MB, 1 GB, 300 GB HD (system disk), NV 7600GS, - Windows XP SP2 Client 1: Sage 6.5.9 - E7200, Abit IP35 Pro, ATI 4850 with HDMI connect to Denon 3808CI and Sony A3000 SXRD TV Client 2: HD200 connected to Denon 3808CI and A3000 SXRD TV Client 3: Media MVP to 15" Toshiba LCD Client 4: HD100 connected to Samsung 23" 720P LCD Client 5: HD100 connected to Vizio VX37L Last edited by mikesm; 02-01-2008 at 01:07 PM.
#13
I can't begin to afford all of this currently, but I can dream, and what you've shown looks very impressive. I can offer a couple of comments though.
There are a number of threads on the Internet concerning dual power supplies and ways to properly integrate them, but the best advice is always to use just one high-power, high-quality power supply. If you still prefer using 2 PSUs, a relay connection is safer than just connecting the Power Good line and a ground, and there are instructions out there for that. You are right that you won't need much power once the drives are up and running, but you will need a LOT at startup, more than you have indicated, unless you can get staggered spin-up to work, and that's a big if. Personally, I would recommend a PC Power & Cooling 700 or bigger, with one huge 12V rail. This 610-watt unit is at a great price and good for 12 drives, but I'm not sure about 16 or more: http://www.newegg.com/product/produc...82E16817703005
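The startup-power concern above is easy to quantify with a 12V rail budget. The per-drive figures here are typical values I am assuming for a 3.5" drive (roughly 2.5A surge during spin-up, 0.6A once spinning), not numbers from the post or from any specific drive's datasheet:

```python
def rail_12v_amps(drives: int, amps_per_drive: float) -> float:
    """Total 12V rail current for a given per-drive draw."""
    return drives * amps_per_drive

SPINUP_A = 2.5   # assumed 12V surge per drive while platters spin up
IDLE_A = 0.6     # assumed 12V draw per drive once spinning

# 16 drives all starting at once need ~40A on the 12V rail, more than
# many single-rail PSUs supply, versus under 10A once running. This
# gap is exactly why staggered spin-up matters for big arrays.
startup = rail_12v_amps(16, SPINUP_A)
running = rail_12v_amps(16, IDLE_A)
```

Checking the PSU's 12V rating against the spin-up number, not the running number, is the safe way to size a supply for an array without staggered spin-up.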
#14
Ok, this is an impressive project, but there's one point that cannot be stressed enough: DO NOT RUN RAID-5 OVER 18 DRIVES, or even 12. I wouldn't even trust that many for RAID-6 (2 parity drives). MTBF is your enemy in multiple-drive setups, and when you get to large-scale arrays you are pretty much guaranteeing drive failure at a frightening rate. See this for background: http://answers.google.com/answers/threadview?id=390140
Your best bet is to run several smaller RAID-5 groupings or mirrored pairs. That way, multiple simultaneous drive failures are less likely to take out a set.

Second point: both software RAID-5 and unaccelerated hardware RAID-5 will be unacceptable for any but the smallest arrays, especially under this data load.

Third point: redundant power. Obviously, I would expect you to run this setup on a UPS, but you are still left with a single point of failure at the power supply. There are a number of good options available, some that even fit into the space of a normal PSU.

I have to say, it's an intriguing project. But remember, there's a reason professionally built systems cost so much. You can build this on a shoestring, but shoestrings can't be counted on to carry much weight without failing. Good luck, and please do keep us posted with your progress.
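The MTBF warning above can be made concrete with a back-of-the-envelope failure model. The 3% annualized failure rate here is my assumed figure, roughly in line with published field studies of consumer drives, not a number from the post, and the model assumes independent failures (real correlated failures, e.g. same batch or shared heat, make things worse):

```python
def p_any_failure(drives: int, afr: float = 0.03) -> float:
    """Probability that at least one of `drives` fails within a year,
    assuming independent failures at annualized failure rate `afr`."""
    return 1 - (1 - afr) ** drives

# One drive: 3%/yr. An 18-drive array: ~42% chance of at least one
# failure per year. With single-parity RAID-5, a second failure during
# the long rebuild window then loses the whole set, which is the case
# for splitting into smaller groups or using double parity.
```

This is why the advice is to keep parity groups small: the chance of a failure grows with every drive added, while each group only tolerates a fixed number of losses.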
#15
Also, I prefer to mount PMP's next to the disk racks using double sticky tape instead of using the PCI style versions. It makes for shorter cable runs and neater installs. If there is no motherboard in the case, that space is great for mounting PMPs and good cable management.
#16
I was planning to run the stacker with one power supply. The case has the ability to contain two and therefore comes with the adapter that allows the first power supply to signal the second power supply to turn on. The reason that I mention it is that I was wondering if that will give me what I need to make the first power supply turn on without a motherboard.
I was really surprised to discover (after a few comments here) that the 2XXX series of cards from Highpoint are all software cards. I have personally tested a couple of them, and they were fast enough that I assumed they were hardware. I have seen a lot of comments about Windows 2008 being so much faster at software RAID. Since I have access to 2008 through my universal account, I think I will put a copy on my test box (from the first Sage setup) and see how it works. I will keep everyone posted on this point.

Thanks for all the great advice... Chris
#17
If you're comfortable with Linux, the software RAID there is much more mature than in Windows 2008, and it's fast too. You can check out this thread for folks' results with NAS boxes: http://forums.sagetv.com/forums/showthread.php?t=25709

I don't think that adapter will make a single PSU switch on, but this page will tell you: http://www.dvhardware.net/modules.ph...rticle&artid=5 Just jumper the ATX connector as shown and you should be good to go. Make sure it's properly taped up so the jumper doesn't contact something in the case. Good luck!
Last edited by mikesm; 02-01-2008 at 11:04 PM.