Hardware Support: Discussions related to using various hardware setups with SageTV products. Anything relating to capture cards, remotes, infrared receivers/transmitters, system compatibility or other hardware related problems or suggestions should be posted here.
#61
Don't use unionfs. The standard solution in the industry is the Logical Volume Manager (LVM, specifically LVM2). It acts as a virtual layer above the disks (or the RAID, in this case) and under the filesystem. It gives you great flexibility in merging RAID sets into a common volume, and lets you shrink them, expand them, etc., all while the filesystem is live. Depending on the distro, you may have a nice GUI front end to LVM2, like SuSE has, or you can go all the way and use EVMS, which has a single front end that controls RAID, LVM and filesystems. That's what I use, though it's a little clunky. The trick is to make the first RAID set, create a volume group on top of it, and then make the filesystem on top of the logical volume. Then you can do practically anything you want. You can even stripe across RAID sets in LVM if you like, giving you effectively RAID 50 style performance, though as you're finding out, that's overkill for this purpose.

I do exactly what you do, in fact. I have five 320 GB SATA disks in RAID5 from a year ago, and two months ago I built a new RAID5 out of five 500 GB drives. Since I hadn't used LVM and EVMS the first time, I built the second array, put LVM on top of it, copied the data from the first RAID volume to the new (bigger) one, wiped the first RAID5 volume, rebuilt it, and added it as a second physical volume under LVM. Now I have a 3 TB filesystem, all under LVM and EVMS. Works great. Depending on how much flexibility you need, you can wait on some of this, as long as you build a new array with more space than what you had before.

30 MB/s is pretty good for Windows networking, but with a few more tweaks you should be able to get to 50 MB/s or so. Complete overkill for media serving, but I sure do get a chuckle out of getting more performance from a cheap home server than I did back in corporate life 5-6 years ago. It's more about bragging rights than anything else, but if you need to push a LOT of data around (backups, etc.), it's nice to have that performance.

Thanks, mike
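The layering described here (md RAID set, LVM physical volume on top of it, volume group, logical volume, then the filesystem) can be sketched with the stock mdadm and LVM2 command-line tools. This is a sketch, not the poster's exact setup: the device names /dev/md0 and /dev/sd[b-f]1 and the names vg_media/lv_media are placeholders.

```shell
# Create a 5-disk RAID5 md array (device names are examples; run as root).
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[b-f]1

# Put LVM2 on top of the array rather than making the filesystem directly
# on it, so more RAID sets can be merged into the same volume later.
pvcreate /dev/md0                          # array becomes an LVM physical volume
vgcreate vg_media /dev/md0                 # volume group on top of it
lvcreate -l 100%FREE -n lv_media vg_media  # logical volume using all the space

# Filesystem goes on the logical volume, not on the raw md device.
mkfs.xfs /dev/vg_media/lv_media
mount /dev/vg_media/lv_media /media
```

Because the filesystem sits on the logical volume rather than the array itself, a later array can be added as a second physical volume and the filesystem grown without unmounting anything.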
#62
Anywhoo, when sending files over to it, each transfer looked as if it was actually limited! I could start a transfer and it would maybe get to 6 MB/s. I would start another running in parallel and it would also get close to 6 MB/s without affecting the speed of the first transfer. With all of that, plus my wanting to run some basic Linux apps, I said nuts to it and loaded Ubuntu. The transfers sped up to about 40-50 MB/s in the Vista dialog, and using bmon on the Linux console, it spiked into the 90s quite often. I'm assuming the Vista dialog was just showing a safe average. I'll run more tests later, when I have an array that isn't initializing or recovering, so I can get some solid numbers!
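For those "solid numbers" later, one repeatable way to measure (rather than trusting the copy dialog) is a timed sequential write with dd against the mounted share. The mount point /mnt/share and the 1 GiB size below are just examples:

```shell
# Write 1 GiB of zeros to the share and let dd report the rate.
# conv=fdatasync makes dd flush to disk before printing the figure,
# so the number isn't inflated by write caching.
dd if=/dev/zero of=/mnt/share/ddtest bs=1M count=1024 conv=fdatasync
rm /mnt/share/ddtest
```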
#63
LVM is pretty complicated, but what you want to do is pretty easy. Most of the LVM guides should have a config that looks similar enough. I think EVMS is included in Ubuntu, and I think there are some Ubuntu howtos for EVMS, which is nice since it has a GUI (evmsgui) to control everything with. If it's on your system and the LVM config is a little overwhelming, try the evmsgui program.

WHS is a POS. What's the point of building a large array of disks without RAID? And every example of it in use I have seen has had crappy performance to boot. If I wanted a completely web-admin based system, I would go with Openfiler, which has full software RAID support, etc. But you probably couldn't tune it quite the way I like, and this way I can configure Apache to run a WebDAV server too, as well as a few other niceties, since it's not an appliance. One of these days I'll get ShowAnalyzer ported over; there are folks on this forum who run it on Linux under Wine with good performance, and since the disks are all on the server, it's a nice way to go.

40-50 MB/s net through Samba? That's more like it. It's nice to push an hour-long program over in 30 seconds or so. Why is your array taking so long to init? If you are creating it from scratch, it should come up almost instantly. And making a filesystem doesn't take long either.

Port multipliers are really sweet if you can get them to work well, which they do in Linux. I have two of the 5-in-3 SATA hot swap modules in my system, and I really enjoy looking at the disk status lights when an HD recording is going. I have room for a third in my case. I'll probably wait until 1 TB disks are cheap for my next build, but with 500 GB 7200.10's available for $80-90 apiece, it's just tempting to keep adding.

Make sure you set up md monitoring, though. If you hit rough air with a disk going south and don't know it, you could be running in degraded mode for quite a while without realizing it, since performance doesn't suffer. Then, if a second disk goes bad, you could be in for a world of hurt. If monitoring is set up right, it'll email you when you lose a disk. I still back up vital stuff (pictures, home movies, etc.) that I can't re-rip or re-record to yet another disk in a different system, and rotate that periodically. It's in a hotswap case, and once a month I swap the drives and take the other drive to my office, just in case. RAID5 protects you against drive failures, but not against accidental deletes or fires. Always have a good backup strategy.

Thx Mike

Last edited by mikesm; 07-16-2007 at 06:29 PM.
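The md monitoring recommended above can be done with mdadm's own monitor mode. A sketch; the email address below is a placeholder, and the commands need root:

```shell
# In /etc/mdadm/mdadm.conf, set where failure mail should go:
#   MAILADDR you@example.com

# Run the monitor as a daemon; it emails on Fail / DegradedArray events
# for every array it finds.
mdadm --monitor --scan --daemonise --mail you@example.com

# Sanity-check mail delivery without pulling a drive: --test sends a
# TestMessage event for each array, then --oneshot exits.
mdadm --monitor --scan --oneshot --test --mail you@example.com
```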
#64
How low can the hardware go and still get close to this performance? I don't have a box with PCI-Express available, but I'd really like to get a little better performance than my current Sage server, which houses all the disks on Windows XP MCE. Does going with software RAID in Linux buy me much?
#65
If I were to build it again, I'd go with an AMD AM2 CPU and a cheap motherboard with AHCI SATA ports like the GA-MA69VM-S2 (or something like that; you can get it for $60). Or if there is a Fry's nearby, pick up a nice combo (non-VIA chipset, of course) motherboard and CPU. They had a special 2 weeks ago with a 4800 X2 CPU and an AN52 motherboard (which had 4 AHCI SATA ports and Gigabit ethernet) for $120 or so. Add 2 GB of cheap DDR2 for $60, and that's basically it. Throw on a cheap PCI vid card; you won't be using the graphics. For lots of disks, use an $80 port multiplier, which turns each AHCI port into 5 SATA ports. No AHCI ports? Buy a $25 SI3132 PCI-E SATA adapter. You get 2 ports and can then hang 10 disks off it.

The main issue is avoiding bottlenecks. You want to make sure the chipset has its SATA ports hung off PCI-E for speed (almost all do), and the same for the GbE chipset. PCI only does 133 MB/s, which is a big bottleneck for disk, and while you could live with it for GbE, it's best to avoid it completely. It doesn't really cost more.

Software RAID in Windows is a POS. Avoid at all costs: slow, unreliable, and of limited flexibility. Linux software RAID, though, is fast, solid and very flexible. I could take the disks from a RAID array on one Linux system and move them to another with completely different SATA hardware, and the array would be detected and the disks put back together in exactly the proper sequence automatically. Makes upgrades easier. RAID5 is nice because you can take a disk failure and keep going. Linux also supports RAID6, which gives you 2-disk failure survivability. Plus it's nice and fast, especially with modern hard drives.

Adding a hotswap 5-in-3 cage for the SATA drives is also really nice; it will cost about $110 or so for a nice one with good ventilation. You should also invest in a nice case with room for drives and good ventilation. Disks hate heat and like lots of air for a long life.

But in any case, if this would be your first Linux project, I'd stay away from it. It's quite different from Windows.

Thanks, Mike
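The "move the disks to different hardware and the array reassembles itself" behavior works because each member disk carries an md superblock recording the array UUID and the disk's slot in it. On the new machine, the check-then-assemble sequence looks roughly like this (device name is an example):

```shell
# Inspect the md superblock on one member: shows the array UUID,
# RAID level, and this disk's position in the array.
mdadm --examine /dev/sdb1

# Scan all partitions and reassemble every array found, regardless of
# which controller or device names the disks ended up on this time.
mdadm --assemble --scan

# Confirm the arrays came up and are healthy.
cat /proc/mdstat
```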
#66
#67
First things first, though. You want to take each disk and add a segment to it; this lets you put one big partition on each disk. You can use the DOS segment manager to do this. After that, take the big partitions (sda1, etc.) and make an md RAID region from them using the RAID manager (as you've already done). Under available objects, create an LVM2 container, then add the storage object to the container; it will show up as free space in the container. Then create a logical volume (also a container in EVMS parlance) with the max size. The free space in the LVM container will drop to zero, and the storage container itself will be the full size of the RAID region. After that, make a volume from the LVM container and format it with XFS. Once that is done, you can mount it and you'll be up and running.

If you want to add the 250 GB disks later, you can get a third one and make a RAID5 set, or add them individually (after you put a segment on them, i.e. one big partition) if you don't care about redundancy. Once you add it (the RAID5 region, or the disk segments individually) to the container that's used for XFS, EVMS will automatically grow the XFS filesystem (online, no less) to add the space. Personally, I wouldn't add the 250s to all your nice 500s; they're too small to be worth it.

It's best to read over the EVMS user guide and get a grasp of some of the terminology. Also, look at http://www.novamind.net/blog/wizo/wp...06/08/evms.png to see how all this stuff is supposed to play together and the terminology that EVMS uses. Does this help at all?

Thanks, Mike
#68
Wow, that was a bit more complex than what I had created last night! I've done it, and here's my problem again...
When I add new disks, I want them to end up under the same mount point, which is why I was looking at unionfs. From what I can tell in my tinkering, I can grow the LVM2 container to 2.7 TB by adding the other two disks as a RAID0 set, but the EVMS volume cannot be expanded, and that's where the mount point is. The 250s are just for play, because I wanted to see what I would need to do when I add a new RAID5 set. EVMS sounded flexible enough that I should be able to add a RAID0 set to the same volume. Oh, I did basically get what I wanted when I used the Linked Disks feature, linked the two MD regions together, and then created the EVMS volume from that LD set. I have yet to figure out whether I can add more disks to the LD set, though. For now, I left it as pictured in the attachment. And I'll say that one thing is better with this setup: the initialization is going faster than before.
#69
Like I said, it's a little clunky, but simpler than using the lvm commands manually. SuSE has a GUI that helps manage LVM too, which is nice, but I don't think Ubuntu does.

PS: At least this is easier than getting HD playback in Vista to work. We have all the tools to make the storage volumes work, but only Sage can fix the HD problem. :-)

Thanks, Mike
#70
I figured it out. I had to use a Compatibility Volume and remove the filesystem before I could expand the volume. I'm not sure why I had to remove the filesystem... which worries me. When I add the filesystem back, I don't use the force option, so existing files won't get overwritten, but I'm curious whether the newly added MD set will be formatted...
#71
VIA chipset motherboards: I've owned two. Both had big issues. Don't go there. ECS motherboards: don't go there either. Many wasted hours of mine on that POS.

My latest main PC is a delight: ASUS M2N-MX mobo with fast-enough integrated graphics (I'm not a gamer), GbE, PC6400 RAM, RAID 0/1, nVidia chipset and onboard graphics (new but very good). AMD X2 dual-core 4200 CPU. Runs cool. Low cost. Hard to get it to 100% utilization.

I liked this enough to do a new home server. I bought another M2N-MX ($65), an AMD X2 3800 ($69) and a pair of 500 GB SATA-II drives ($109 ea) with nVidia's RAID1. Works very well. Very fast. I tried for hours to overwhelm it with reads/writes to see if there are corruption problems. None so far. RAID1 is set up with a small C: for booting with small blocks, and a V: partition with 64K blocks for video. The CPU heat sink (stock) is room temperature all the time.

A pitch for NewEgg: this was my 3rd order from them. I ordered the mobo, CPU and memory online Sunday, and UPS delivered them on Tuesday. Yes, I am but 90 miles from their warehouse.

Last edited by stevech; 07-17-2007 at 01:54 PM.
#72
The way this should work is: you create the RAID0 region, add it to the container so it shows up as free space, then add it as another PV to the main LV, and then expand the filesystem.

Thanks, Mike
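In plain LVM2 terms (outside the EVMS GUI), that add-then-expand sequence looks roughly like the following. The device /dev/md1 and the names vg_media/lv_media are placeholders; adjust to your own layout:

```shell
# The new RAID0 md region becomes another physical volume...
pvcreate /dev/md1
# ...its space shows up as free space in the existing volume group...
vgextend vg_media /dev/md1
# ...the logical volume takes all of the new free space...
lvextend -l +100%FREE /dev/vg_media/lv_media
# ...and XFS is grown online, by mount point, while still in use.
xfs_growfs /media
```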
#73
By the way, which motherboard are you using?

Last edited by toricred; 07-17-2007 at 05:48 PM.
#74
You absolutely don't need new hardware, but here's where I think the rub is: you really want to use SATA disks. Not just because port multipliers let you cheaply run lots of disks, but because PATA drives are hard to run reliably in an array. That is, if I have an ATA disk on the primary channel and another disk on the secondary, on the same cable, a lot of faults with one drive can knock out the other. Two drive failures are bad for RAID5, and SATA disks are faster and cheaper too.

Given that, you can add a SATA controller to an older PATA-only motherboard, but those generally don't have PCI-E slots, and the only SATA controllers supported for PMP operation are SATA 2 ports. The SI 3112 and 3114, which are common parts on PCI SATA controller boards, aren't supported. Neither are the typical SATA 1 ports found on older motherboards that don't have AHCI. Given all that, you'd spend a decent amount trying to stick a reasonable number of drives on an older motherboard, so you are better off with a newer one.

Personally, I use an old DFI S939 NF4 motherboard (you can get them for about $40-50 used) and a $20 3132 (Syba) PCI-E SATA2 controller (the nv_sata ports on the motherboard do RAID fine but don't support PMPs). A single-core CPU is just fine, BTW, but the dual cores are just so cheap now. The reason I tend not to recommend the S939 MBs anymore is that they need DDR1 memory, which costs as much as 2-3x DDR2, which is dirt cheap. 939 CPUs are also more expensive than AM2s, so everything drives you to an AM2 MB with AHCI SATA ports and PCI-E GbE. It's the cheapest way to get a good platform that is upgradable in the future. If you shop around or look on eBay for a good used set of boards, I think you can get a very inexpensive but fast system put together.

BTW, with Linux RAID you can put 4 SATA drives on onboard ports and move them to a PMP later when you want to expand the array. I know; I did just that. That's what I mean by flexibility over hardware RAID. Getting an ATI-based motherboard with AHCI SATA ports (some even have a 3132 on them additionally), extra PCI-E slots and onboard video for a low price would be a good foundation. It's a nice AM2 platform with a low system cost that would be good to start out with. I find all this a much better solution than a prepackaged NAS appliance with almost no expansion capability.

Anyway, my config for my server is:
AMD 3800 X2 S939 CPU
DFI NF4 Lanparty MB with PCI-E GbE controller
Syba 3132 PCI-E SATA2 controller
1 GB of DDR1 RAM
2 3726-based port multipliers
1 CSE-35T-1B SATA 5-in-3 hot swap array
1 Addonics 5-in-3 SATA hot swap array
Coolermaster Stacker case (which can take 2 PSUs)
1 Antec 480W NeoPower PSU
1 Samsung CD/DVD optical drive
1 old Maxtor 40 GB system disk (ATA)
5x 320 GB WD SATA drives
5x 500 GB Seagate 7200.10 SATA drives

Not that expensive a system, and it produces a screamingly fast 3 TB RAID server. The disks are the most expensive part of the server, which is as it should be.

thanks, Mike

Last edited by mikesm; 07-17-2007 at 06:21 PM.
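As a sanity check on that "3 TB" figure: RAID5 gives up one drive's worth of space per set to parity, so the usable space of the two sets listed above works out as follows (sizes in marketing GB):

```shell
# Usable capacity of two RAID5 sets merged under LVM:
# an N-drive RAID5 set yields (N-1) drives' worth of space.
set1=$(( (5 - 1) * 320 ))    # five 320 GB drives -> 1280 GB usable
set2=$(( (5 - 1) * 500 ))    # five 500 GB drives -> 2000 GB usable
echo "$(( set1 + set2 )) GB usable"   # prints: 3280 GB usable
```

3280 GB is roughly the 3 TB figure quoted for the combined filesystem.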
#75
No offense taken at all. I was just trying to show I'm not totally new to Linux. I'm glad to hear about the older systems. I would be re-using 2 GB of DDR I already have, in addition to a Sempron 3000+ if it could handle it, so I should be able to save some money there. I think the motherboard and drives would be the main expense. My wife will probably get really upset if I go over $400 for the whole thing, and the drives I currently have are a mismatch of sizes and PATA/SATA1. I guess I'll have to replace those.

The reason I want to do this is that I've gone to using 4 SD tuners almost non-stop, in addition to compressing at least one SD show to H.264 with EvilPenguin's utility and running ShowAnalyzer on 3 shows at once. This is obviously really taxing my drives when I'm also trying to watch HD.
#76
#77
Sounds like you could use another CPU to help with all the tasks you're asking the system to perform...

Thanks, Mike
#78
Or if you want to go the SuSE route, 10.3 alpha6 should be out on Thursday, and it has all the needed patches in it. I had issues with EVMS startup too. You have to make sure LVM is not starting as well, just EVMS; otherwise they fight over the devices and no good happens. Did you rename swap and the other entries in fstab to the EVMS versions? /dev/evms/sda2 for swap, /dev/evms/sda3 for /usr, etc. You don't need to change root, but if you start EVMS it will grab all the non-root disks, and fstab has to be modified as noted in the howto.

Thx Mike
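A sketch of what that fstab change can look like; EVMS exposes the devices it manages under /dev/evms/, and the partition names and mount points below are hypothetical examples, not a known-good config:

```shell
# /etc/fstab fragment: point non-root mounts at the /dev/evms/ names so
# EVMS, not the raw device nodes, owns them at boot. Root stays as-is.
#
#   /dev/evms/sda2   none    swap   sw         0  0
#   /dev/evms/sda3   /usr    xfs    defaults   0  2
#   /dev/evms/media  /media  xfs    defaults   0  2
```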
#79
I'm using a vanilla kernel that I grabbed from kernel.org. I applied the patch for it, then turned on some 64-bit support, made sure there was support for all the SATA devices, and compiled it. I started with the 2.6.17.4 kernel and had problems. I then tried 2.6.18.1 and couldn't get that to compile either. Then 2.6.22-rc6 worked, so I've been using that.

I was mostly in a bad mood last night; I'm not sure what I'm going to do yet. I could just KISS, make the RAID, and go on my merry way without EVMS, but I really want to be able to grow this properly with additional RAID sets that have different-sized disks compared to the first RAID set. That's where I'm running into the wall.

If the new alpha will be out tomorrow, I'll give it a shot.
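For reference, the patch-and-build cycle described above follows the usual kernel.org tarball workflow, roughly as below. The version number and patch file name are placeholders, not the poster's exact files:

```shell
# Unpack a vanilla kernel and apply a patch before configuring.
tar xjf linux-2.6.22-rc6.tar.bz2        # hypothetical version
cd linux-2.6.22-rc6
patch -p1 < ../some-feature.patch       # placeholder patch name

make menuconfig        # enable 64-bit support, SATA drivers, md/dm, etc.
make                   # build the kernel image and modules
make modules_install   # install modules under /lib/modules/<version>
make install           # install bzImage/System.map, update the bootloader
```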
#80
P.S. It turns out I got my HD problem fixed by re-installing PureVideo. I had upgraded the nVidia drivers and somehow it lost PureVideo. This should buy me a few more months and hopefully help me get the wife to free up some more $$$.