SageTV Community  

Hardware Support Discussions related to using various hardware setups with SageTV products. Anything relating to capture cards, remotes, infrared receivers/transmitters, system compatibility or other hardware related problems or suggestions should be posted here.

  #61  
Old 07-16-2007, 03:54 PM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
Mike,
Thanks so much for those links! It wasn't until this morning that I realized Vista was part of the problem. I've disabled the things recommended on that site, set up the disk optimization service and started it, changed my Samba settings and restarted it, and recreated my array with a 128K chunk size.

I just copied a file from my Vista client to the new system at what Vista reported to be 30MB/sec over the network...and since I recreated my array, it is still initializing.

I don't think I'm out of the woods yet, but you really helped me out, thanks a ton!

[edit]
OK, I've copied over 100GB of data now and NO problems whatsoever. I'm taking the plunge and starting to fill it up to migrate some stuff off my home PC. That should put it at about 1TB used. Then once I have these other disks clean, I will move them into the new server. The problem with that is that they are 750GB drives and the server has 500GB drives, so if I just grow the current array, I'd be wasting space. My current thought is to create a new RAID5 with the 750GB drives (md1) and then use UnionFS (http://www.linuxjournal.com/article/7714) to merge them into the same share. The first array would be used until it's full, and then writes would move to the second array.
[/edit]
Glad to help. This stuff needs to be pulled together in one spot as a reference, but until then, I can help you at least as far as I have gone.

Don't use UnionFS. The standard solution in the industry is the Logical Volume Manager - LVM, or specifically LVM2. It acts as a virtual layer above the disks (or the RAID, in this case) and under the filesystem. It gives you great flexibility in merging RAID sets into a common volume, and lets you shrink them, expand them, etc., all while the filesystem is live.

Depending on the distro, you may have a nice GUI front end to LVM2, like SUSE has, or you can go all the way and use EVMS, which has a single frontend that controls RAID, LVM, and filesystems. That's what I use, though it's a little clunky.

The trick, though, is to make the first RAID set, create a volume group on top of it, and then make the filesystem on top of the logical volume. After that you can do practically anything you want. You can even stripe across RAID sets in LVM if you like, giving you effectively RAID 50-style performance, though as you're finding out, that's overkill for this purpose.
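For reference, here's a minimal sketch of that layering using the plain command-line tools (mdadm plus the LVM2 utilities). The 128K chunk size matches what KJake used earlier; the device names, the vg_media/lv_media names, and the mount point are just examples I'm assuming - substitute your own:

Code:
# Build the RAID5 set from four whole-disk partitions (example devices)
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=128 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Layer LVM on top of the md device, then put the filesystem on the LV
pvcreate /dev/md0
vgcreate vg_media /dev/md0
lvcreate -l 100%FREE -n lv_media vg_media
mkfs.xfs /dev/vg_media/lv_media
mount /dev/vg_media/lv_media /mnt/media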

In fact, I do exactly what you're planning. I have 5 320GB SATA disks in RAID5 from a year ago, and 2 months ago I built a new RAID5 out of 5 500GB drives. Since I hadn't used LVM and EVMS the first time, I built the 2nd array, put LVM on top of it, copied the data from the first RAID volume to the new (bigger) one, wiped the first RAID5 volume, rebuilt it, and added it as a 2nd physical volume under LVM. Now I have a 3 TB filesystem, all under LVM and EVMS. Works great.

Depending on how much flexibility you need, you can wait on some of this, as long as you build a new array with more space than what you had before.

30 MB/s is pretty good for Windows networking, but with a few more tweaks you should be able to get to 50 MB/s or so. That's complete overkill for media serving, but I sure do get a chuckle out of getting more performance from a cheap home server than I did back in corporate life 5-6 years ago. It's more about bragging rights than anything else, but if you need to push a LOT of data around (backups, etc.), it's nice to have that performance.
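The thread doesn't spell out which Samba settings got changed, but for reference, the smb.conf knobs people usually tweak for gigabit throughput with this era of Samba look roughly like this (the values are illustrative, not a recommendation from anyone in the thread):

Code:
[global]
    # No Nagle delay and larger socket buffers for GbE transfers
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    # Let the kernel push file data straight to the socket
    use sendfile = yes
    read raw = yes
    write raw = yes
    max xmit = 65535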

Thanks,
mike
Reply With Quote
  #62  
Old 07-16-2007, 05:38 PM
KJake KJake is offline
Sage Icon
 
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by mikesm View Post
Glad to help. This stuff needs to be pulled together in one spot as a reference, but until then, I can help you at least as far as I have gone.
I've never set up a system this complex before - again, much appreciated.

Quote:
Originally Posted by mikesm View Post
Don't use the unionfs. The standard solution in the industry is something called logical volume manager - lvm, or specifically lvm2. This acts a virtual layer above the disks (or RAID in this case), and under the filesystem. It allows you great flexibility in merging raid sets into a common volume, and allows you to shrink them, expand them, etc..., all while the filesystem is live.
Oh, poop...I just queued up about 900GB to transfer as I left home. I'd rather start fresh the right way, so I guess I'll move it all back to the current system and then work on LVM2. I've used it before, but not in conjunction with mdadm, so I'll have to do some reading...but you're right, LVM is the way to go here - I'd forgotten about it.

Quote:
Originally Posted by mikesm View Post
Depending on the distro, you may have a nice GUI front end to LVM2, like Suse has, or you can go all the way and use EVMS, which has a single frontend that controls raid, lvm and filesystems. That's what I use, though it's a little clunky to use.
No GUI that I know of, but Ubuntu 7.04 added a lot more GUI tools than 6.06 and 6.10 had.

Quote:
Originally Posted by mikesm View Post
30 MB/s is pretty good for windows networking, but a few more tweaks and you should be able to get to 50 MB/s or so. Complete overkill for media serving, but I sure do get a chuckle out of getting more performance from a cheap home server than I did back in corporate life 5-6 years ago. It's more about bragging rights than anything else, but if you need to push a LOT of data around (backups etc..), it's nice to have that performance.
Agreed. I'm pleased with it for sure. Before installing Linux, I thought I'd try out WHS (Windows Home Server). I'm impressed with it in that it's a super slick system for your average consumer...but not very flexible. And I originally liked its disk setup for aggregating disks, until I realized that if you wanted data protection you needed to be prepared to lose half your storage. (I know, WHS makes a full backup of the data like a mirror, and RAID5 is _not_ a backup, only redundancy.)
Anywhoo, when sending files over to it, each transfer looked as if it was being throttled! I could start a transfer and it would maybe get to 6MB/s. I would start another running in parallel and it would also get close to 6MB/s without affecting the speed of the first transfer.
With all of that, plus my desire to run some basic Linux apps, I said nuts to it and loaded Ubuntu.

The transfers sped up to about 40-50MB/sec according to the Vista dialog, and watching bmon on the Linux console, it spiked into the 90s quite often. I'm assuming the Vista dialog was just showing a conservative average.

I'll run more tests later when I have an array that isn't initializing or recovering, so I can get some solid numbers!
Reply With Quote
  #63  
Old 07-16-2007, 06:24 PM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
I've never set up a system this complex before - again, much appreciated.


Oh, poop...I just queued up about 900GB to transfer as I left home. I'd rather start fresh the right way, so I guess I'll move it all back to the current system and then work on LVM2. I've used it before, but not in conjunction with mdadm, so I'll have to do some reading...but you're right, LVM is the way to go here - I'd forgotten about it.


No GUI that I know of, but Ubuntu 7.04 added a lot more GUI tools than 6.06 and 6.10 had.


Agreed. I'm pleased with it for sure. Before installing Linux, I thought I'd try out WHS (Windows Home Server). I'm impressed with it in that it's a super slick system for your average consumer...but not very flexible. And I originally liked its disk setup for aggregating disks, until I realized that if you wanted data protection you needed to be prepared to lose half your storage. (I know, WHS makes a full backup of the data like a mirror, and RAID5 is _not_ a backup, only redundancy.)
Anywhoo, when sending files over to it, each transfer looked as if it was being throttled! I could start a transfer and it would maybe get to 6MB/s. I would start another running in parallel and it would also get close to 6MB/s without affecting the speed of the first transfer.
With all of that, plus my desire to run some basic Linux apps, I said nuts to it and loaded Ubuntu.

The transfers sped up to about 40-50MB/sec according to the Vista dialog, and watching bmon on the Linux console, it spiked into the 90s quite often. I'm assuming the Vista dialog was just showing a conservative average.

I'll run more tests later when I have an array that isn't initializing or recovering, so I can get some solid numbers!
Yeah, ain't this stuff great? Hardware RAID, eat my shorts!

LVM is pretty complicated, but what you want to do with it is pretty easy. Most of the LVM guides should show a config that looks similar enough. I think EVMS is included in Ubuntu, and there are some Ubuntu howtos for EVMS, which is nice since it has a GUI (evmsgui) to control everything. If it's on your system and the LVM config is a little overwhelming, try the evmsgui program.

WHS is a POS. What's the point of building a large array of disks without RAID? And every example of it in use I have seen has had crappy performance to boot. If I wanted a complete webadmin-based system, I would go with Openfiler, which has full software RAID support, etc. But I probably couldn't tune it quite the way I like, and this way I can configure Apache to run a WebDAV server too, along with a few other niceties, since it's not an appliance. One of these days I'll get ShowAnalyzer ported over - there are folks on this forum who run it on Linux under WINE with good performance, and since the disks are all on the server, it's a nice way to go.

40-50MB/s net through Samba? That's more like it. It's nice to push an hour-long program over in 30 seconds or so.

Why is your array taking so long to init? If you are creating it from scratch, it should come up almost instantly. And making a filesystem doesn't take long either.

Port multipliers are really sweet if you can get them to work well, which they do in Linux. I have 2 of the 5in3 SATA hot swap modules in my system, and I really enjoy looking at the disk status lights when an HD recording is going... I have room for a 3rd in my case. I'll probably wait until 1 TB disks are cheap for my next build, but with 500 GB 7200.10's available for $80-90 a piece, it's just tempting to keep adding.

Make sure you set up md monitoring though. If a disk goes south and you don't know it, you could be running in degraded mode for quite a while without realizing it - performance doesn't suffer enough to tip you off. Then if a 2nd disk goes bad, you could be in for a world of hurt. With monitoring set up right, if you lose a disk it'll email you to let you know.
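For anyone setting that up, a sketch of md monitoring - assuming a Debian/Ubuntu-style /etc/mdadm/mdadm.conf path and a working local mail setup (both assumptions, adjust for your distro):

Code:
# In /etc/mdadm/mdadm.conf: where failure notifications should go
MAILADDR you@example.com

# Run the monitor as a daemon (most distros start this from an init script)
mdadm --monitor --scan --daemonise

# Send a test alert for each array to confirm mail actually arrives
mdadm --monitor --scan --oneshot --test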

I still back up the vital stuff that I can't re-rip or re-record (pictures, home movies, etc.) to yet another disk in a different system, and rotate that backup periodically. It's in a hotswap case, and once a month I swap the drives and take the other one to my office, just in case.

RAID5 protects you against drive failures, but not against accidental deletes, fires, etc. Always have a good backup strategy.

Thx
Mike

Last edited by mikesm; 07-16-2007 at 06:29 PM.
Reply With Quote
  #64  
Old 07-16-2007, 08:18 PM
toricred's Avatar
toricred toricred is offline
Sage Icon
 
Join Date: Jan 2006
Location: Northern New Mexico
Posts: 1,729
How low can the hardware go and still get close to this performance? I don't have a box with PCI-Express available, but I'd really like to get a little better performance than I get from my Sage server housing all the disks on Windows XP MCE. Does going with software RAID in Linux buy me much?
Reply With Quote
  #65  
Old 07-16-2007, 10:17 PM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by toricred View Post
How low can the hardware go and still get close to this performance? I don't have a box with PCI-Express available, but I'd really like to get a little better performance than I get from my Sage server housing all the disks on Windows XP MCE. Does going with software RAID in Linux buy me much?
Well, the hardware can be pretty low-end; I run a $60 AMD X2 3800 CPU and an old NF4 motherboard you can get for $50.

If I were building it again, I'd go with an AMD AM2 CPU and a cheap motherboard with AHCI SATA ports like the GA-MA69VM-S2 (or something like that - you can get it for $60). Or if there's a Fry's nearby, pick up a nice combo motherboard and CPU (non-VIA chipset, of course). They had a special 2 weeks ago with a 4800 X2 CPU and an AN52 motherboard (4 AHCI SATA ports and Gigabit Ethernet) for $120 or so. Add 2GB of cheap DDR2 for $60, and that's basically it. Throw on a cheap PCI video card; you won't be using the graphics.

For lots of disks, use an $80 port multiplier, which turns each AHCI port into 5 SATA ports. No AHCI ports? Buy a $25 SI3132 PCI-E SATA adapter - you get 2 ports and can hang 10 disks off it.

The main issue is avoiding bottlenecks. You want to make sure the chipset's SATA ports hang off PCI-E for speed (almost all do), and the same for the GbE chipset. PCI only does 133MB/s, which is a big bottleneck for disk, and while you could live with it for GbE, it's best to avoid it completely. It doesn't really cost more.

Software RAID in Windows is a POS. Avoid it at all costs: slow, unreliable, and limited in flexibility. Linux software RAID, though, is fast, solid, and very flexible. I could take the disks from a RAID array on one Linux system and move them to another with completely different SATA hardware, and the array would be detected and the disks put back together in exactly the proper sequence automatically. Makes upgrades easier.
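That works because md stamps a UUID into each member's superblock, so mdadm can reassemble the set no matter which /dev/sdX names the disks come up under. A sketch of the moving-day commands (the config path shown is the Debian/Ubuntu one, an assumption on my part):

Code:
# Look at a member disk and note the array UUID in its superblock
mdadm --examine /dev/sdb1

# On the new box: assemble every array whose members are present
mdadm --assemble --scan

# Optionally record the arrays so they assemble automatically at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf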

RAID5 is nice because you can take a disk failure and keep going. Linux also supports RAID6, which gives you 2 disk failure survivability. Plus it's nice and fast, esp. with modern hard drives.

Adding a hotswap 5in3 cage for the SATA drives is also really nice; it'll cost about $110 or so for a nice one with good ventilation. You should also invest in a good case with room for drives and good airflow - disks hate heat and like lots of air for a long life.

But in any case, if this would be your first Linux project, I'd stay away from it. It's quite different from Windows.

Thanks,
Mike
Reply With Quote
  #66  
Old 07-17-2007, 12:15 AM
KJake KJake is offline
Sage Icon
 
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by mikesm View Post
The trick, though, is to make the first RAID set, create a volume group on top of it, and then make the filesystem on top of the logical volume. After that you can do practically anything you want. You can even stripe across RAID sets in LVM if you like, giving you effectively RAID 50-style performance, though as you're finding out, that's overkill for this purpose.
OK, I think I've done this now - see attached pics. To test this out, I also brought home two 250GB SATA drives from work. I've tried everything I can think of this late at night, and I can't seem to get them added to vol1. Does it have to be another RAID5 Region?
Attached Images
File Type: png EVMSVol.png (38.1 KB, 320 views)
File Type: png EVMSFree.png (30.2 KB, 301 views)
Reply With Quote
  #67  
Old 07-17-2007, 01:20 AM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
OK, I think I've done this now - see attached pics. To test this out, I also brought home two 250GB SATA drives from work. I've tried everything I can think of this late at night, and I can't seem to get them added to vol1. Does it have to be another RAID5 Region?
Right. RAID5 needs disks that are approximately the same size, which the 250s aren't. You'll need 3 of them to make a RAID5 set, or you could add them to the main LVM container with no redundancy.

First things first, though. You want to take each disk and add a segment to it; this puts one big partition on each disk. You can use the DOS segment manager to do this.

After that, take the big partitions (sda1, etc.) and make an md RAID region from them using the RAID manager (as you've already done). Under available objects, create an LVM2 container, then add the storage object to the container; it will show up as free space in the container. Then create a logical volume (also called a container in EVMS parlance) with the max size. The free space in the LVM container will drop to zero, and the logical volume itself will be the full size of the RAID region.

After that, make a volume from the LVM container and format it with XFS. Once that's done, you can mount it and you'll be up and running.

If you want to add the 250GB disks later, you can get a 3rd one and make a RAID5 set, or add them individually (after you put a segment - one big partition - on each) if you don't care about redundancy. Once you add it (the RAID5 region, or the disk segments individually) to the container that's used for XFS, EVMS will automatically grow the XFS filesystem (online, no less) to take in the space.

Personally, I wouldn't add the 250's to all your nice 500's. Too small to be worth it.

It's best to read over the EVMS user guide and get a grasp of some of the terminology. Also, look at http://www.novamind.net/blog/wizo/wp...06/08/evms.png to see how all this stuff is supposed to play together and the terminology that EVMS uses.

Does this help at all?

Thanks,
Mike
Reply With Quote
  #68  
Old 07-17-2007, 07:36 AM
KJake KJake is offline
Sage Icon
 
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Wow, that was a bit more complex than what I had created last night! I've done it, and here's my problem again...

When I add new disks, I want them to end up under the same mount point, which is why I was looking at Unionfs. From what I can tell here in my tinkering, I can grow the LVM2 container to 2.7TB by adding the other two disks as a RAID0 set, but the EVMS Volume cannot be expanded - and that's where the mountpoint is.

The 250s are just for play because I wanted to see what I would need to do when I do add a new RAID5 set. EVMS sounded flexible enough to me that I should be able to add a RAID0 set to the same volume.

Oh, I did basically get what I wanted when I added the Linked Disks feature and linked the two MD regions together and then created the EVMS Volume from that LD set. I have yet to figure out if I can add more disks to the LD set though.

For now, I've left it as pictured in the attachment. And I'll say one thing is better with this setup: the array initialization is going faster than before.
Attached Images
File Type: png EVMSVol.png (50.2 KB, 285 views)
Reply With Quote
  #69  
Old 07-17-2007, 10:25 AM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
Wow, that was a bit more complex than what I had created last night! I've done it, and here's my problem again...

When I add new disks, I want them to end up under the same mount point, which is why I was looking at Unionfs. From what I can tell here in my tinkering, I can grow the LVM2 container to 2.7TB by adding the other two disks as a RAID0 set, but the EVMS Volume cannot be expanded - and that's where the mountpoint is.

The 250s are just for play because I wanted to see what I would need to do when I do add a new RAID5 set. EVMS sounded flexible enough to me that I should be able to add a RAID0 set to the same volume.

Oh, I did basically get what I wanted when I added the Linked Disks feature and linked the two MD regions together and then created the EVMS Volume from that LD set. I have yet to figure out if I can add more disks to the LD set though.

For now, I've left it as pictured in the attachment. And I'll say one thing is better with this setup: the array initialization is going faster than before.
Linked disks work, but they're EVMS-specific. Look at section 16.3 in the EVMS user guide for the specific syntax to do what you want with LVM. You have to pick the specific volume (logical volume) to expand. You must add the RAID0 set to the container (volume group, in LVM parlance) so it shows up as free space first; then you can expand the volume. I'd recommend sticking with LVM, as it's more standard and you're already using it.

Like I said, it's a little clunky, but simpler than using the LVM commands manually. SUSE has a GUI that helps manage LVM too, which is nice, but I don't think Ubuntu does.

PS At least this is easier than getting HD playback in Vista to work. We have all the tools to make the storage volumes work, but only Sage can fix the HD problem. :-)

Thanks,
Mike
Reply With Quote
  #70  
Old 07-17-2007, 12:31 PM
KJake KJake is offline
Sage Icon
 
Join Date: May 2003
Location: West Michigan
Posts: 1,117
I figured it out. I had to use a Compatibility Volume and remove the filesystem before I could expand the volume. I'm not sure why I had to remove the filesystem...which worries me. When I add the filesystem back, I don't use the force option, so existing files won't get overwritten, but I'm curious whether the newly added MD set will be formatted...
Reply With Quote
  #71  
Old 07-17-2007, 01:49 PM
stevech stevech is offline
Sage Icon
 
Join Date: Dec 2005
Posts: 1,643
Quote:
Originally Posted by mikesm View Post
If I were building it again, I'd go with an AMD AM2 CPU and a cheap motherboard with AHCI SATA ports like the GA-MA69VM-S2 (or something like that - you can get it for $60). Or if there's a Fry's nearby, pick up a nice combo motherboard and CPU (non-VIA chipset, of course).
My comments, based on the school of hard knocks:
VIA chipset motherboards: I've owned two, and both had big issues. Don't go there.
ECS motherboards: don't go there either. I've wasted many hours on those POS boards.

My latest main PC is a delight: ASUS M2N-MX mobo with fast-enough integrated graphics (I'm not a gamer), GbE, PC6400 RAM, RAID 0/1, nVidia chipset and onboard graphics (new but very good). AMD X2 dual core 4200 CPU. Runs cool. Low cost. Hard to get it to 100% utilization.

I liked this enough to do a new home server. I bought another M2N-MX ($65), an AMD X2 3800 ($69), and a pair of 500GB SATA-II drives ($109 ea.) with nVidia's RAID1. Works very well. Very fast. I tried for hours to overwhelm it with reads/writes to see if there were corruption problems; none so far. The RAID1 is set up with a small C: partition for booting (small blocks) and a V: partition with 64K blocks for video. The stock CPU heatsink is at room temperature all the time. A pitch for NewEgg: this was my 3rd order from them. I ordered the mobo, CPU, and memory online Sunday and UPS delivered them on Tuesday. Yes, I am but 90 miles from their warehouse.

Last edited by stevech; 07-17-2007 at 01:54 PM.
Reply With Quote
  #72  
Old 07-17-2007, 02:03 PM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
I figured it out. I had to use a Compatibility Volume and remove the filesystem before I could expand the volume. I'm not sure why I had to remove the filesystem...which worries me. When I add the filesystem back, I don't use the force option, so existing files won't get overwritten, but I'm curious whether the newly added MD set will be formatted...
You shouldn't need to do that. However, XFS can only be expanded with the volume online. If the volume is deactivated, you will not be able to expand it.

The way this should work is: you create the RAID0 region, add it to the container (as another PV) so it shows up as free space, then extend the main LV into that space, and then expand the filesystem.
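In plain LVM commands (outside of EVMS), that sequence looks roughly like this - assuming the group and volume are named vg_media/lv_media as in the earlier sketch, the new RAID0 set comes up as /dev/md1, and the filesystem is mounted at /mnt/media (all names are examples, not from KJake's actual setup):

Code:
# New RAID0 region from the two 250GB drives (example devices)
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdf1 /dev/sdg1

# Add it to the volume group as a second physical volume (free space)
pvcreate /dev/md1
vgextend vg_media /dev/md1

# Grow the logical volume into the new space, then grow XFS online
lvextend -l +100%FREE /dev/vg_media/lv_media
xfs_growfs /mnt/media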

Thanks,
Mike
Reply With Quote
  #73  
Old 07-17-2007, 05:39 PM
toricred's Avatar
toricred toricred is offline
Sage Icon
 
Join Date: Jan 2006
Location: Northern New Mexico
Posts: 1,729
Quote:
Originally Posted by mikesm View Post


But in any case, if this would be your first Linux project, I'd stay away from it. It's quite different from Windows.

Thanks,
Mike
This is not my first Linux project. I've set up and maintained several mail, web, and Jabber servers over the last 4-5 years, but I haven't used RAID, so I wasn't sure how much that was helping. I have a very old system I'd thought about using, but it's not even close to what you describe. For that matter, none of the systems in my house are anywhere near that spec, but I'll try to talk the wife into freeing up some money for this.

By the way, which motherboard are you using?

Last edited by toricred; 07-17-2007 at 05:48 PM.
Reply With Quote
  #74  
Old 07-17-2007, 06:18 PM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by toricred View Post
This is not my first Linux project. I've set up and maintained several mail, web, and Jabber servers over the last 4-5 years, but I haven't used RAID, so I wasn't sure how much that was helping. I have a very old system I'd thought about using, but it's not even close to what you describe. For that matter, none of the systems in my house are anywhere near that spec, but I'll try to talk the wife into freeing up some money for this.

By the way, which motherboard are you using?
Sorry if that came across as an insult - I didn't mean it that way. Not being familiar with Linux is no fault. But since you've used it a lot in the past, it wouldn't be hard for you.

You don't even need new hardware, but here's where I think the rub is. You really want to use SATA disks - not just because port multipliers let you cheaply run lots of disks, but because PATA drives are hard to run in an array reliably. If two ATA disks share the same cable (master and slave on one channel), a lot of faults on one drive can knock out the other. 2 drive failures are bad for RAID5, and SATA disks are faster and cheaper too.

Given that, you can add a SATA controller to an older PATA-only motherboard, but those boards generally don't have PCI-E slots, and the only SATA controllers supported for PMP operation are SATA 2 ports. The SI3112 and 3114, which are common parts on PCI SATA controller boards, aren't supported. Neither are the typical SATA 1 ports found on older motherboards that don't have AHCI.

Given all that, you'd spend a decent amount trying to stick a reasonable number of drives on an older motherboard, so you're better off with a newer one.

Personally, I use an old DFI S939 NF4 motherboard (you can get them for about $40-50 used) and a $20 Syba 3132 PCI-E SATA2 controller (the onboard nv_sata ports do RAID fine but don't support PMPs). A single-core CPU is just fine, BTW, but the dual cores are just so cheap now. The reason I tend not to recommend S939 boards anymore is that they need DDR1 memory, which costs as much as 2-3X what DDR2 does, and DDR2 is dirt cheap. 939 CPUs are also more expensive than AM2s, so everything drives you to an AM2 board with AHCI SATA ports and PCI-E GbE. It's the cheapest way to get a good platform that's upgradable in the future.

If you shop around or look on eBay for a good used set of boards, I think you can put together a very inexpensive but fast system. BTW, with Linux RAID you can put 4 SATA drives on onboard ports and move them to a PMP later when you want to expand the array. I know - I did just that. That's what I mean by flexibility over hardware RAID.

Getting an ATI-based motherboard with AHCI SATA ports (some even have an additional 3132 on them), extra PCI-E slots, and onboard video for a low price would be a good foundation. It's a nice AM2 platform with low system cost that's a good place to start.

I find all this a much better solution than a prepackaged NAS appliance with almost no expansion capability.

Anyways, my config for my server is:

AMD 3800 X2 S939 CPU
DFI NF4 Lanparty MB with PCIE GbE controller
Syba 3132 PCI-E SATA2 controller
1 GB of DDR1 RAM
2 3726 based Port Multipliers
1 CSE-35T-1B SATA 5in3 hot swap array
1 Addonics 5in3 SATA hot swap array
Coolermaster Stacker Case (which can take 2 PSU's)
1 Antec 480W neopower PSU
1 Samsung CD/DVD optical drive
1 old Maxtor 40GB system disk (ATA)
5x320 GB WD SATA drives
5x500 GB Seagate SATA 7200.10 drives

Not that expensive a system, and it produces a screamingly fast 3 TB RAID server. The disks are the most expensive part, which is as it should be.

thanks,
Mike

Last edited by mikesm; 07-17-2007 at 06:21 PM.
Reply With Quote
  #75  
Old 07-17-2007, 06:58 PM
toricred's Avatar
toricred toricred is offline
Sage Icon
 
Join Date: Jan 2006
Location: Northern New Mexico
Posts: 1,729
No offense taken at all - I was just trying to show I'm not totally new to Linux. I'm glad to hear about the older systems. I'd be reusing 2GB of DDR I already have, plus a Sempron 3000+ if it can handle it, so I should be able to save some money there. I think the motherboard and drives would be the main expense. My wife will probably get really upset if I go over $400 for the whole thing, and the drives I currently have are a mismatch of sizes and PATA/SATA1, so I guess I'll have to replace those.

The reason I want to do this is that I'm now using 4 SD tuners almost non-stop, in addition to compressing at least one SD show to H.264 with EvilPenguin's utility and running ShowAnalyzer on 3 shows at once. That's obviously really taxing my drives when I'm also trying to watch HD.
Reply With Quote
  #76  
Old 07-17-2007, 09:34 PM
KJake KJake is offline
Sage Icon
 
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by mikesm View Post
You shouldn't need to do that. However, XFS can only be expanded with the volume online. If the volume is deactivated, you will not be able to expand it.

The way this should work is: you create the RAID0 region, add it to the container (as another PV) so it shows up as free space, then extend the main LV into that space, and then expand the filesystem.

Thanks,
Mike
Bleh...maybe I'll wait for SUSE 10.3. EVMS is kicking my ass. Every time I reboot, my sda and sdd swap with each other and I get all sorts of errors, which basically means I needed to follow this (http://evms.sourceforge.net/install/kernel.html#bdclaim)... I tried, and it's still swapping the drives around, and I don't feel like compiling yet another kernel (esp. since I'm using an RC kernel)...
Reply With Quote
  #77  
Old 07-17-2007, 10:30 PM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by toricred View Post
No offense taken at all - I was just trying to show I'm not totally new to Linux. I'm glad to hear about the older systems. I'd be reusing 2GB of DDR I already have, plus a Sempron 3000+ if it can handle it, so I should be able to save some money there. I think the motherboard and drives would be the main expense. My wife will probably get really upset if I go over $400 for the whole thing, and the drives I currently have are a mismatch of sizes and PATA/SATA1, so I guess I'll have to replace those.

The reason I want to do this is that I'm now using 4 SD tuners almost non-stop, in addition to compressing at least one SD show to H.264 with EvilPenguin's utility and running ShowAnalyzer on 3 shows at once. That's obviously really taxing my drives when I'm also trying to watch HD.
The Sempron is fine (I take it it's a 939?). If you already have the DDR, then a 939-based system makes perfect sense. The drives are a bigger issue - I wouldn't try to run RAID5 with mismatched disks - and they're the bulk of the cost too.

Sounds like you could use another CPU to help with all the tasks you're asking the system to perform...

Thanks,
Mike
Reply With Quote
  #78  
Old 07-17-2007, 10:35 PM
mikesm mikesm is offline
Sage Icon
 
Join Date: Jul 2003
Posts: 1,293
Quote:
Originally Posted by KJake View Post
Bleh...maybe I'll wait for SUSE 10.3. EVMS is kicking my ass. Every time I reboot, my sda and sdd swap with each other and I get all sorts of errors, which basically means I needed to follow this (http://evms.sourceforge.net/install/kernel.html#bdclaim)... I tried, and it's still swapping the drives around, and I don't feel like compiling yet another kernel (esp. since I'm using an RC kernel)...
I don't think that's the issue. What kernel are you running? 7.04 should have a late-model kernel with all the needed EVMS patches in it. I run a straight 2.6.18-8 kernel with just the PMP patches applied. Try that.

Or if you want to go the SUSE route, 10.3 alpha6 should be out on Thursday, and it has all the needed patches in it.

I had issues with EVMS startup too. You have to make sure LVM is not starting as well - just EVMS - otherwise they fight over the devices and no good comes of it. Did you rename swap and the other mounts in fstab to use the EVMS versions? /dev/evms/sda2 for swap, /dev/evms/sda3 for /usr, etc. You don't need to change root, but if you start EVMS it will grab all the non-root disks, and fstab has to be modified as noted in the howto.
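An illustrative fstab fragment of what that looks like - the sda2/sda3 entries are the ones Mike mentions; the media line and the filesystem types are just examples I'm assuming:

Code:
# /etc/fstab -- non-root mounts pointed at the EVMS-managed device nodes
/dev/evms/sda2    none      swap   sw                 0  0
/dev/evms/sda3    /usr      ext3   defaults           0  2
/dev/evms/media   /media    xfs    defaults,noatime   0  0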

Thx
Mike
Reply With Quote
  #79  
Old 07-18-2007, 09:07 AM
KJake KJake is offline
Sage Icon
 
Join Date: May 2003
Location: West Michigan
Posts: 1,117
Quote:
Originally Posted by mikesm View Post
I don't think that's the issue. What kernel are you running? 7.04 should have a late-model kernel with all the needed EVMS patches in it. I run a straight 2.6.18-8 kernel with just the PMP patches applied. Try that.

Or if you want to go the SUSE route, 10.3 alpha6 should be out on Thursday, and it has all the needed patches in it.

I had issues with EVMS startup too. You have to make sure LVM is not starting as well - just EVMS - otherwise they fight over the devices and no good comes of it. Did you rename swap and the other mounts in fstab to use the EVMS versions? /dev/evms/sda2 for swap, /dev/evms/sda3 for /usr, etc. You don't need to change root, but if you start EVMS it will grab all the non-root disks, and fstab has to be modified as noted in the howto.

Thx
Mike
Yup, I tried renaming things in the fstab to use /dev/evms and I tried using the exclude in /etc/evms.conf.

I'm using a vanilla kernel that I grabbed from kernel.org. I applied the patch for it, turned on some 64-bit support, made sure there was support for all the SATA devices, and compiled it. I started with the 2.6.17.4 kernel and had problems. I then tried 2.6.18.1 and couldn't get that to compile. Then 2.6.22-rc6 worked, so I've been using that.

I was mostly in a bad mood last night; I'm not sure what I'm going to do yet. I could just KISS, make the RAID, and go on my merry way without EVMS, but I really want to be able to grow this properly with additional RAID sets that have different-sized disks than the first RAID set...that's where I'm running into a wall.

If the new alpha is out tomorrow, I'll give it a shot.
Reply With Quote
  #80  
Old 07-18-2007, 06:49 PM
toricred's Avatar
toricred toricred is offline
Sage Icon
 
Join Date: Jan 2006
Location: Northern New Mexico
Posts: 1,729
Quote:
Originally Posted by mikesm View Post
The Sempron is fine (I take it it's a 939?). If you already have the DDR, then a 939-based system makes perfect sense. The drives are a bigger issue - I wouldn't try to run RAID5 with mismatched disks - and they're the bulk of the cost too.

Sounds like you could use another CPU to help with all the tasks you're asking the system to perform...

Thanks,
Mike
That machine isn't doing all the tasks; it's currently a client. The system handling most of the tasks (except for the transcoding) is a Socket 754 Athlon64 3000+. The transcoding is done on a client identical to the one I described. If I add MVPs, I think I'll be in big trouble.

P.S. It turns out I got my HD problem fixed by reinstalling PureVideo. I had upgraded the Nvidia drivers and somehow it lost PureVideo. This should buy me a few more months and hopefully help me get the wife to free up some more $$$.
Reply With Quote