SageTV Community > Hardware Support
  #41  
Old 06-12-2007, 04:20 PM
valnar
Sage Icon
Join Date: Oct 2003
Posts: 1,220
Quote:
Originally Posted by Lucas View Post
For NasliteV2, for starters, you don't need to know anything about Linux.
Around a 1GHz CPU is the sweet spot. One of my servers has a 1.8GHz P4-based Celeron; the other is a Pentium II 233MHz with a 200W PSU and 6 drives.

The P4's CPU utilisation never goes above 40% and averages around 5%.
The PII averages around 30%.

What's your throughput from a P4-or-better class computer to a 1GHz NAS PC? The only problem I foresee in building your own on an "older" PC is the PCI bus bandwidth limit, especially if a gigabit NIC shares the bus with anything else.
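Robert's worry is easy to sanity-check with back-of-envelope numbers. A rough sketch (the 133MB/s figure is the theoretical maximum of classic 32-bit/33MHz PCI, not something measured in this thread):

```shell
# Classic 32-bit/33MHz PCI tops out at ~133MB/s, shared by every device on the bus.
pci_limit=133      # MB/s, theoretical shared-bus ceiling
gige_rate=125      # MB/s, gigabit Ethernet line rate (1000Mbit / 8)
echo "GbE at line rate would claim $((gige_rate * 100 / pci_limit))% of the PCI bus"
```

So a gigabit NIC alone can nearly saturate the shared bus before the disk controllers get a byte through it.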

My 2 cents.

Robert
__________________
Server: ASUS P7F-E, Intel Xeon x3440, 4GB RAM, DR-400-6 RM case, 2x Addonics Disk Array 3SA, 7TB disk,
2x Nvidia DualTV, 2x HDHomeRun, HD-PVR 1212, Win7-32bit, Sage 7 latest
Client 1: HD300 on Panasonic Plasma TC-P50ST60
Client 2: HD300 on Vizio 40"
Others: Sage Client & Placeshifter
  #42
Old 06-12-2007, 04:59 PM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Well, I took the plunge and I'm coming up with justification phrases in my head to tell my wife

Here's what I ordered:
Lian-Li PC-V2100B PLUSII Case
ASUS P5K-V Motherboard (going to use onboard Video)
ST3500630AS (6x500GB Seagates)
W0132RU (Thermaltake 1000W PSU)
VS1GB667D2 (2x1G DDR2 PC-5300 Corsair RAM)
Then I have an Intel X3220 at the office that can't be used - I'll buy it from them.
Sony DVD-ROM
And I will have a spare 250G PATA drive that I will put the OS on, I'm going with Linux.

That should have no problem calculating parity.

Last edited by KJake; 06-12-2007 at 05:08 PM.
  #43
Old 06-12-2007, 05:35 PM
valnar
Sage Icon
Join Date: Oct 2003
Posts: 1,220
Quote:
Originally Posted by KJake View Post
Well, I took the plunge and I'm coming up with justification phrases in my head to tell my wife

Here's what I ordered:
Lian-Li PC-V2100B PLUSII Case
ASUS P5K-V Motherboard (going to use onboard Video)
ST3500630AS (6x500GB Seagates)
W0132RU (Thermaltake 1000W PSU)
VS1GB667D2 (2x1G DDR2 PC-5300 Corsair RAM)
Then I have an Intel X3220 at the office that can't be used - I'll buy it from them.
Sony DVD-ROM
And I will have a spare 250G PATA drive that I will put the OS on, I'm going with Linux.

That should have no problem calculating parity.
You seriously do not need a 1000W PSU. In fact, when I saw them come out, I initially thought it was a joke.

-Robert
  #44
Old 06-12-2007, 05:39 PM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by valnar View Post
You seriously do not need a 1000W PSU. In fact, when I saw them come out, I initially thought it was a joke.
Oh well, I figured they're more for SLI setups, but I didn't want to run into problems if I packed this case full of hard drives. Too late now, the order has processed.
  #45
Old 06-12-2007, 05:43 PM
valnar
Sage Icon
Join Date: Oct 2003
Posts: 1,220
http://www.silentpcreview.com/forums...ic.php?t=40292


-Robert
  #46
Old 06-12-2007, 05:48 PM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
I guess I'm not too far out of line based on the links posted there. It's a Quad Core processor and I may eventually have over 20 hard drives...I like to plan for the future.
  #47
Old 06-12-2007, 11:19 PM
Lucas
Sage Icon
Join Date: Aug 2004
Location: Greece
Posts: 1,156
Wayyy over the top for Linux and NAS duties!
__________________
Windows 10 64bit - Server: C2D, 6Gb RAM, 1xSamsung 840 Pro 128Gb, Seagate Archive HD 8TB - 2 x WD Green 1TB HDs for Recordings, PVR-USB2,Cinergy 2400i DVB-T, 2xTT DVB-S2 tuners, FireDTV S2
3 x HD300s
  #48
Old 06-12-2007, 11:31 PM
Lucas
Sage Icon
Join Date: Aug 2004
Location: Greece
Posts: 1,156
Quote:
Originally Posted by valnar View Post
What's your throughput from a P4-or-better class computer to a 1GHz NAS PC? The only problem I foresee in building your own on an "older" PC is the PCI bus bandwidth limit, especially if a gigabit NIC shares the bus with anything else.

My 2 cents.

Robert
Around 24MB/s sustained writes to the RAID 5 array.

Around 15MB/s writes to single disk drives.

Even the Pentium II 233MHz manages around 12MB/s writes.

This is with SMB from WinXP clients; I have noticed that there is a limitation in WinXP.

Accesses over FTP or NFS are much faster. On server mobos with 64-bit PCI-X or similar, throughput can go up to 40MB/s.

The PCI bus is probably the limit, along with SMB protocol limitations on the client side.
  #49
Old 06-13-2007, 12:12 AM
mikesm
Sage Icon
Join Date: Jul 2003
Posts: 1,293
My system can deliver just under 100MB/s in read performance, less for writes. It's complete overkill for HTPC serving, but as I said, it didn't cost much to get there. The disks are plenty fast anyway, and they're the bulk of the cost.

The key to speed is enough memory to buffer multiple client requests, keeping not just the disks but also the gigabit interface off the PCI bus, and supporting jumbo frames to systems that support them. This is all via SAMBA file sharing.
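The jumbo-frames point can be sketched roughly like this (the interface name eth0 is an assumption, and every switch and client on the path must also support the larger MTU, or frames get dropped):

```shell
# Enable jumbo frames on the NAS's gigabit interface (assumed name: eth0).
# Requires root; all switches and clients in the path must support MTU 9000.
ip link set dev eth0 mtu 9000

# Confirm the new MTU took effect:
ip link show eth0
```

On distros of this era the equivalent `ifconfig eth0 mtu 9000` also works; either way the setting does not survive a reboot unless put in the distro's network config.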

I haven't measured speed via FTP or HTTP via WebDAV, but it's very fast...

Thanks,
Mike
  #50
Old 06-13-2007, 03:06 AM
Lucas
Sage Icon
Join Date: Aug 2004
Location: Greece
Posts: 1,156
100MB/s is huge! Well done! That's a fully saturated gigabit LAN.

I agree that memory helps with buffering. I have only tested by copying large (10-20GB) files back and forth, and in that case the RAM doesn't help much.

I get a peak at the start, and then once the buffer fills up the transfers settle down to what the disks and controllers can handle.

Even the 25MB/s or so I get is more than enough to serve 4-5 clients concurrently.
  #51
Old 06-13-2007, 05:42 AM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by Lucas View Post
Wayyy over the top for Linux and NAS duties!
I plan on replacing my current Linux server with this one. It will pick up everything the old server was doing, plus more. I have been bitten too many times in the past by "playing it safe" when picking out components, only to end up replacing things a year or two later - I want this one to last 5+ years.
  #52
Old 06-13-2007, 10:13 AM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by KJake View Post
I guess I'm not too far out of line based on the links posted there. It's a Quad Core processor and I may eventually have over 20 hard drives...I like to plan for the future.
Whoops, I'm wrong. I'm not sure how I was reading that last night... I guess I could stick with a 600W and still be playing it safe. Not sure if there's any chance I can return the 1000W behemoth, but I would have changed the order had it not already been processed last night. Glad I didn't go for the 1200W...
  #53
Old 06-13-2007, 11:21 AM
mikesm
Sage Icon
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
Whoops, I'm wrong. I'm not sure how I was reading that last night... I guess I could stick with a 600W and still be playing it safe. Not sure if there's any chance I can return the 1000W behemoth, but I would have changed the order had it not already been processed last night. Glad I didn't go for the 1200W...
Just get a decent 400-600W PSU, and a case that allows you to add a separate PSU for the disks. Disks really only care about +12V, and if you can stagger their spin-up, you probably don't even need a separate PSU. They each pull about 1A off the +12V rail at spin-up, but it drops to a fraction of that shortly after.
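Using Mike's ~1A-per-drive figure, even a worst-case simultaneous spin-up is a modest load. A quick sketch (the 20-drive count is just an example matching KJake's plans above):

```shell
# Worst case: every drive spins up at once (no staggered spin-up).
drives=20
amps_per_drive=1     # ~1A on the +12V rail per drive, per the estimate above
watts=$((drives * amps_per_drive * 12))
echo "Peak +12V draw: $((drives * amps_per_drive))A (about ${watts}W)"
```

Even 20 drives spinning up together only need around 240W on the +12V rail by this estimate, which is why a quality 400-600W unit is plenty.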

Thanks,
Mike
  #54
Old 06-13-2007, 11:53 AM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by mikesm View Post
Just get a decent 400-600W PSU, and a case that allows you to add a separate PSU for the disks. Disks really only care about +12V, and if you can stagger their spin-up, you probably don't even need a separate PSU. They each pull about 1A off the +12V rail at spin-up, but it drops to a fraction of that shortly after.
There's no opening in the case for a second PSU... is the 1000W OK then, since effectively I'd then have the power of two PSUs?
  #55
Old 06-13-2007, 12:34 PM
mikesm
Sage Icon
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
There's no opening in the case for a second PSU... is the 1000W OK then, since effectively I'd then have the power of two PSUs?
Use a CM-Stacker. It's the best case for this.

Thanks
Mike
  #56
Old 07-15-2007, 06:11 PM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by mikesm View Post
Also, modern kernels support the use of SATA port multipliers, so each of the 6 SATA 2 ports found on your motherboard can actually drive 5 SATA drives each. That's 30 SATA drives, just off the motherboard SATA ports.
Hey Mike, can you let me know what distro, distro version, and kernel version you're using? I'm trying Ubuntu 7.04 with a vanilla 2.6.22-rc6 patched for PMP, and I can see and use all the drives; however, Samba seems to choke when I dump a large amount of data at once, especially if something else on the network tries to access the server via Samba while another system is sending the data.

I'm just using so much new stuff here that I really don't know if it is the PMP patch, XFS, or Samba... or something else I haven't considered. I'd like to use your configuration as a baseline for getting mine running.

Last edited by KJake; 07-15-2007 at 08:34 PM.
  #57
Old 07-15-2007, 09:45 PM
mikesm
Sage Icon
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
Hey Mike, can you let me know what distro, distro version, and kernel version you're using? <snip>
I am running Suse 10.2 with a patched 2.6.18 kernel. The latest patches enable PMP support on AHCI ports; the older patches only worked on the 3132 PCI-E SATA controller.

Is the ethernet on a PCI-E bus? If not, that could be a bottleneck when you hit it with a lot of traffic. You don't want the GbE controller on PCI. Is your GbE switched? Tell me a bit about your network setup.

Also, what filesystem are you using? XFS is the only way to go for this sort of use, as it handles big files really well. Have you tuned your I/O parameters? I now see north of 150MB/s writes and 200+ MB/s on reads. What are you seeing in terms of performance on the local filesystem, not over the network?

BTW, the Suse 10.3 alpha should have the latest SATA patches, including AHCI controller support for PMPs, plus EVMS, which makes disk management a lot easier. If you can wait, I'd recommend 10.3, since the stock kernel will do it all. Dunno how comfortable you are running an alpha distro, but the beta should be out soon.

Also, tell me about your Windows network. Domain controller? Did you tune the SAMBA parameters for performance? Things like BUFSIZ, etc...

Thanks,
Mike
  #58
Old 07-16-2007, 08:26 AM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by mikesm View Post
I am running Suse 10.2 with a patched 2.6.18 kernel. The latest patches enable PMP support on AHCI ports; the older patches only worked on the 3132 PCI-E SATA controller.
Yup, I'm using a 3132 controller with the PMP attached to it. Only 3 of my 6 drives are connected to it. I have the other 3 connected to the on-board SATA controller, which is nVidia SATA. The board only has one internal SATA port for the 3132. I'll need to order another 3132 when I want to add more than 2 drives.

I haven't run SUSE since 9.3 (I started using Linux with SuSE 7.0) - lately I have preferred .deb to .rpm. But heck, I just want this to work so I can put it in the corner and forget about it.

Quote:
Originally Posted by mikesm View Post
Is the ethernet on a PCI-E bus? If not, that could be a bottleneck when you hit it with a lot of traffic. You don't want the GbE controller on PCI. Is your GbE switched? Tell me a bit about your network setup.
I'm currently using the nVidia Ethernet controller, but there is also a Marvell GigE controller. It isn't clear whether the nVidia one is on the PCI-E bus, but the Marvell is. All the computers are connected using GigE switches; the two that I'm testing with are next to each other on the same switch.
Code:
00:13.0 Bridge: nVidia Corporation CK804 Ethernet Controller (rev a3)
00:14.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
00:16.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
00:17.0 PCI bridge: nVidia Corporation CK804 PCIE Bridge (rev a3)
05:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8053 PCI-E Gigabit Ethernet Controller (rev 22)
Quote:
Originally Posted by mikesm View Post
Also, what filesystem are you using? XFS is the only way to go for this sort of use, as it handles big files really well. Have you tuned your I/O parameters? I now see north of 150MB/s writes and 200+ MB/s on reads. What are you seeing in terms of performance on the local filesystem, not over the network?
I formatted the volume as XFS. I don't have any experience with it, so I just did mkfs.xfs /dev/md0 and went on my merry way. The read and write speeds really depend on multiple variables, so I've attached IOzone graphs. The speeds are really good for smaller files, then dip below 100MB/s above 1GB file sizes (which is what we care about here)... if I'm interpreting the graphs correctly (also why I attached them). I should add that the array is in the process of resyncing right now, so that may have slowed things down.
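For what it's worth, XFS can be told the RAID geometry at mkfs time; a bare `mkfs.xfs /dev/md0` leaves that to autodetection, which wasn't always reliable on md arrays of this era. A hedged sketch, assuming a 6-disk RAID 5 (5 data disks) with 128K chunks:

```shell
# su = stripe unit (the md chunk size); sw = stripe width (number of DATA
# disks per stripe: 6 drives in RAID 5 minus 1 parity = 5).
# WARNING: this destroys any existing data on /dev/md0.
mkfs.xfs -d su=128k,sw=5 /dev/md0
```

Aligning the filesystem's allocation to the stripe means large sequential writes land as full stripes, avoiding RAID-5 read-modify-write cycles.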

Quote:
Originally Posted by mikesm View Post
Also, tell me about your windows network? Domain controller? Did you tune the SAMBA parameters for performance? Things like BUFSIZ, etc...
They're all peer-to-peer systems, no domain controller (I have XP Home systems that can't join a domain). I didn't tune Samba either; I just went with the defaults (besides opening one share wide as a dumping ground).

Quote:
Originally Posted by mikesm View Post
BTW, the Suse 10.3 alpha should have the latest in SATA patches, including AHCI controller support for the PMP's, plus EVMS which makes disk management a lot easier. If you could wait, I'd recommend 10.3, as the stock kernel will do it all. Dunno how comfortable you are with running an alpha distro, but the beta should be out soon.
10.3 Alpha 5 is running 2.6.22-rc4, and according to http://home-tj.org/wiki/index.php/Libata-tj-stable, the PMP patch has been submitted for 2.6.23 consideration. So I'm not sure that 10.3 will work out of the box.

About the server:
The processor is a Core 2 Duo at 3.2GHz (overclocked from 2.93GHz) with 2GB of RAM. The FSB is running at 1066MHz. I'm pretty sure it is fast enough to handle the I/O.

[edit]
Oh, and the system I'm copying large (700MB+) files from is a Vista Ultimate system. Are there any known issues? The Samba version is 3.0.24, which solves the NTLMv2 problems reported with Vista back in 2006, so I haven't had to make any policy changes or registry hacks on my Vista system - but maybe there's something else I'm missing?

When the array finishes syncing, maybe I'll try copying a 1GB+ file from my XP MCE system and from my Vista system to see which one causes the hangup.
[/edit]
Attached Images
File Type: png ReadPerf.png (60.0 KB, 214 views)
File Type: png WritePerf.png (65.5 KB, 178 views)

Last edited by KJake; 07-16-2007 at 08:59 AM.
  #59
Old 07-16-2007, 11:53 AM
mikesm
Sage Icon
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
Yup, I'm using a 3132 controller with the PMP attached to it. Only 3 of my 6 drives are connected to it. I have the other 3 connected to the on-board SATA controller, which is nVidia SATA. The board only has one internal SATA port for the 3132. I'll need to order another 3132 when I want to add more than 2 drives.

<snip>

Ok, lots of ground to cover! This sounds like an nVidia 650i or 680i board, right? If so, the nVidia GbE controller is on the PCI-E bus, so no problem there.

I think the SATA ports on these boards are AHCI-compatible, so you'll be able to use the latest PMP patch (it was just posted to the libata-tj site a couple of weeks ago) without adding a second controller. The PMP code is slated to go into the mainline kernel in the 2.6.23 branch, but the next 10.3 alpha has it patched into the base Suse kernel.

Resyncing RAID 5 will impact performance, but that isn't your problem. You mentioned that you are running Vista on the client. There are apparently lots of issues with Vista's networking speed, especially when UAC and Remote Differential Compression (RDC) are on. Measurements have shown twice the traffic on the wire when moving large files compared to an XP system. Try turning UAC and RDC off on Vista and see if that improves performance. You can find references to these issues with Google.

As for optimizations, if you have enough DRAM in your system (2GB is plenty), I would run the little optimization script I've attached. It burns about 700MB of RAM purely for I/O buffers, but it makes XFS scream. The front part of the script has some special stuff to make it a Suse service plugin, but you can strip that off for other distros. Start it after you bring the filesystems up. You can run it after everything is mounted too, but it needs to run at every startup, since these parameters revert to kernel defaults at boot. You may need to change device names and such, but it should be a decent guide for you.
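The attached script isn't reproduced here, but tuning of this era for large sequential I/O typically touched knobs like the following. This is a generic sketch only, not the attached diskopt script; the device name and values are illustrative:

```shell
# Generic large-sequential-I/O tuning sketch (NOT the attached diskopt script).
# Requires root; values are illustrative and revert to defaults at boot.
blockdev --setra 16384 /dev/md0           # larger readahead on the array device
sysctl -w vm.dirty_ratio=40               # let more dirty pages buffer in RAM
sysctl -w vm.dirty_background_ratio=10    # kick off background writeback sooner
```

The common theme is exactly what Mike describes: spend RAM on buffering so the disks see big sequential bursts instead of small scattered writes.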

Also, really important: make sure you are using a chunk size of 128K or so for your RAID striping. 256K works slightly better but is a little overkill. You want a good stride for reads/writes of large files...
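Illustratively, the chunk size is set when the array is created (the device names here are placeholders, and reshaping an existing array's chunk size wasn't really practical on kernels of this era, so it pays to get it right up front):

```shell
# Create a 6-drive RAID 5 with a 128K chunk, per the advice above.
# /dev/sd[b-g] are placeholder device names -- adjust for your system.
mdadm --create /dev/md0 --level=5 --chunk=128 --raid-devices=6 /dev/sd[b-g]
```

The chunk size here is what the filesystem's stripe-unit setting should match, so the two layers agree on stripe boundaries.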

As for SAMBA tuning, look at the howto section here: http://www.comp.hkbu.edu.hk/docs/s/s...wto/speed.html

and samba 3.X tuning info in this pretty good series here: http://www.techworld.com/opsys/featu...?featureid=451. Lots of good info out there by searching google...

Hope this helps!

Thanks,
Mike
Attached Files
File Type: zip diskopt.zip (913 Bytes, 217 views)

Last edited by mikesm; 07-16-2007 at 11:56 AM.
  #60
Old 07-16-2007, 01:31 PM
KJake
Sage Icon
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Mike,
Thanks so much for those links! It wasn't until this morning that I suspected Vista was part of the problem. I've disabled the things recommended on that site, set up the disk optimization service and started it, changed the Samba settings and restarted it, and recreated my array with a 128K chunk size.

I just copied a file from my Vista client to the new system at what Vista reported to be 30MB/sec over the network...and since I recreated my array, it is still initializing.

I don't think I'm out of the woods yet, but you really helped me out, thanks a ton!

[edit]
OK, I've copied over 100GB of data now with NO problems whatsoever. I'm taking the plunge and starting to fill it up to migrate some stuff off my home PC; that should put it at around 1TB used. Then, once those other disks are clean, I will move them into the new server. The problem is that they are 750GB drives and the server has 500GB drives, so if I just grow the current array I'd be wasting space. My current thought is to create a second RAID 5 with the 750GB drives (md1) and then use UnionFS (http://www.linuxjournal.com/article/7714) to merge both arrays into the same share. The first array will be used until it is full, and then writes will move to the second array.
[/edit]
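The UnionFS idea sketched above might look something like this (the mount points are assumptions, and the option syntax follows the out-of-tree unionfs 1.x module of this era):

```shell
# Present the two md arrays as one tree. With both branches rw, unionfs
# directs new writes to the first branch listed until policy/space dictates
# otherwise; the merged view lives at /mnt/storage.
mount -t unionfs -o dirs=/mnt/md0=rw:/mnt/md1=rw unionfs /mnt/storage
# Then export /mnt/storage via Samba as the single combined share.
```

This avoids growing a RAID 5 across mixed 500GB/750GB drives, where every member would be truncated to the smallest disk.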

Last edited by KJake; 07-16-2007 at 03:25 PM.