Quote:
Depends on how files are written, AFAIK Sage doesn't "reserve" 6GB or whatever for a recording, it allocates space incrementally, so yes, it does "split"
|
No, it doesn't. This has nothing to do with SageTV. It only has to do with the OS reclaiming file system space when a file is deleted. The deletion of the file is what creates the open space. Unlike an SSD, where deletion really just marks the blocks for deletion.
Quote:
There is absolutely no need to do it on a recording or media drive.
|
We are talking past each other. If the media drive is local to the server, then defragging the disk and the OS drive absolutely helps. It speeds up both disk performance and the OS's performance moving the data (OS processes, etc., are swapped out just the same and can suffer the same disk-related I/O problems).
Quote:
My point though was that if you're recording to a NAS, you want one with good performance. Lots of NAS's have horrible performance, nothing close to a local disk. So if you're recording to a NAS you want to make sure to get a relatively high performance one. Oh and defrag is moot on a NAS, you can't do it unless you're doing iSCSI or something.
|
Of course there are crappy NAS boxes on the market - agreed. However, there are absolutely NAS boxes that can be defragged. And they benefit.
Just like you pointed to: build a Windows Storage Server as a NAS and make it an iSCSI target, or simply access it as a shared folder (not many folks use WSS, but you can use Windows 2003 R2 or 2008 R2 for almost the same thing).
In any case, I'm not talking about an external box. I was referring to an internal RAID array on the server, which is as fast as it gets, period.
Quote:
Try Crystal Diskmark on it with 4k size. Most SSDs can't hit 100MB/sec @4k.
|
Not sure what your point was here. I was referring to actual random-I/O performance numbers on a RAID 5 SATA disk array.
Quote:
SageTV made software changes in V7 such that this is no longer needed.
|
It may be true that SageTV improved the overall performance of how and when it accesses the disk in V7. However, that has nothing to do with 64K block sizes being unnecessary. My statements hold and are true. And as many will attest, V6 worked fine with a modern disk with no need to move off of 4K.
Quote:
The X7SPA installed in a case with PSU pulls ~20W total.
|
Let's take this at face value. 20W is almost 7x as much power as the RAID card needs, every day, all day. And you still have the extra pain of the added box.
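To put that always-on draw in dollar terms, here is a quick back-of-the-envelope sketch. All figures are assumptions for illustration: ~20W for the separate box (per the quote), ~3W added draw for the RAID card (consistent with the "almost 7x" figure above), and an assumed $0.12/kWh electricity rate.

```python
# Annual power cost comparison: separate always-on NAS box vs. a RAID
# card added to an existing server. Wattages and $/kWh are assumptions.
HOURS_PER_YEAR = 24 * 365
RATE = 0.12  # $/kWh, assumed

def annual_cost(watts):
    """Return (kWh/year, $/year) for a constant draw in watts."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh, kwh * RATE

for label, w in [("separate NAS box", 20), ("RAID card in server", 3)]:
    kwh, dollars = annual_cost(w)
    print(f"{label}: {w} W -> {kwh:.0f} kWh/yr, ${dollars:.2f}/yr")
```

Not a huge dollar amount either way; the point is the ~7x ratio, plus the extra box to manage.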
Quote:
I don't know what more there is to maintain, I don't "maintain" my unRAID box any more than the RAID-5 array in my old server. And as far as power goes, I went from 2 boxes: (Sage server with 8 drive RAID-5 array on a 3ware card + 2 caviar blacks + a couple other drives) and a ReadyNAS X7, to 3 boxes
|
I guess this depends on your definition of maintain. To me (and I think to most), 3 boxes are more to maintain than 2.
Quote:
And you waste the space on the larger drives, or you have to create a second volume/array on the spare space on the larger drives
|
Of course. However, it is not a requirement to run out and get matching drives.
Quote:
and if you upgrade your drives, you have to manually expand or create new volumes. And then expand the partitions.
|
I think you are missing the point: the RAID array should be one big GPT partition. Then, of course, you need to tell the array to expand; how would the RAID controller know whether you want to do that versus use the space for something else? But if you have one partition, it will expand automatically. So don't put more than one partition on it; again, use one big GPT partition. There really is no value any more in having multiple partitions anyway.
Quote:
And you kill half the point of a RAID-5 array if you try that.
|
Not sure what you mean. The array will be slightly slower, and only slightly, depending on the number of drives you have: the speed is effectively averaged across the stripes of the drives, so it is not easily pulled down by one single slightly slower drive.
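A toy model of that averaging argument, under a simplifying assumption: for streaming reads, aggregate throughput is roughly the sum of the member drives' speeds, so one slightly slower drive only dents the total a little. (A real controller reading stripes strictly in order can instead be gated by the slowest member; the per-drive speeds below are made-up round numbers.)

```python
# Sum-of-members model for sequential array throughput (simplified).
def aggregate_mb_s(drive_speeds):
    """Approximate streaming throughput as the sum of member speeds."""
    return sum(drive_speeds)

matched = [100, 100, 100, 100]   # MB/s, assumed identical drives
one_slow = [100, 100, 100, 90]   # one drive 10% slower

base = aggregate_mb_s(matched)
degraded = aggregate_mb_s(one_slow)
drop_pct = 100 * (base - degraded) / base
print(f"matched: {base} MB/s, one slow drive: {degraded} MB/s "
      f"({drop_pct:.1f}% drop)")
```

Under this model, a 10% slower single drive costs the four-drive array only about 2.5% of its total throughput.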
Quote:
So you're telling me if I have a 4x 1.5TB array (4.5TB total space), and I replace 2 of the drives with 2.0TB drives the array would automatically expand to 5TB? I don't think so, not on any RAID-5 system I've seen. Any individual unit on the card is limited by the smallest drive in the unit.
To expand with larger drives, you'd end up having to make a second RAID 1 volume on the extra 500GB of each drive, and then you'd have two units on the card/drives. When you add your third 2.0TB drive, you'd have to RLM the RAID-1 to RAID-5, but you'd still have two units, and I've not heard of any cards allowing you to merge units.
|
Total usable space is more like 4.2...
Again, we are talking past each other. When you upgrade the array, change all four of the drives and then expand, not one or two, or of course you will have stranded space.
All you have to do is hot-swap out each smaller drive for a larger drive and wait for the automatic rebuild. Then move to the next drive until you have replaced all of them. Finally, tell the array to expand. Done.
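The capacity arithmetic behind the swap-all-four-then-expand procedure can be sketched as follows. RAID 5 usable space is (n - 1) x the smallest drive, which is also why swapping only two drives gains nothing. Drive sizes are decimal TB as marketed; the OS reports binary TiB, which is part of why "4.5 TB" shows up as a smaller number in practice.

```python
# RAID 5 usable capacity: (n - 1) * smallest member.
def raid5_usable_tb(drives_tb):
    return (len(drives_tb) - 1) * min(drives_tb)

def tb_to_tib(tb):
    """Convert marketed decimal TB to the binary TiB the OS reports."""
    return tb * 1e12 / 2**40

before = [1.5, 1.5, 1.5, 1.5]
half_swapped = [2.0, 2.0, 1.5, 1.5]  # still limited by smallest drive
after = [2.0, 2.0, 2.0, 2.0]         # all four swapped, then expanded

for label, drives in [("4x1.5TB", before), ("2 swapped", half_swapped),
                      ("4x2.0TB", after)]:
    tb = raid5_usable_tb(drives)
    print(f"{label}: {tb:.1f} TB usable (~{tb_to_tib(tb):.2f} TiB)")
```

Note the middle case: with only two drives upgraded, usable space is unchanged at 4.5 TB, exactly the "limited by the smallest drive in the unit" behavior described in the quote.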
Quote:
You mentioned SAS expanders as a benefit of a hardware RAID card, that's why I mentioned them. Have you tried to find one? Easier said than done. As far as number of drives, I've got 12TB of redundant storage, how do I do that today with 4 drives? Best you could do is 9TB and that's by buying external drives and ripping the HDDs out of them.
|
I didn't purposely mention SAS, though the card I mentioned does support it. SAS drives are fast but super expensive and not available in the densities I (or you) need.
Of course you can use more than 4 drives: buy a controller that supports more (like 8). Not sure why you need 12TB in one bank, but it doesn't cost much more to get a controller that supports more drives.
Quote:
Look, I'm not unfamiliar with RAID cards, I've got one, I've looked at all this, like I said I spend the better part of a year on and off trying to figure out how to solve my running out of storage problems. I tried very hard to avoid unRAID, but in the end, the things I was worried about (more power, an extra box, more managment, higher cost) all turned out to be non-issues.
|
I'm not claiming you are unfamiliar. From the sounds of it, you have explored and set up various solutions. As have I; I think I mentioned I run multiple RAID 5 banks on my SageTV server, plus a networked NAS and a ReadyNAS (for other purposes like backup, security video, etc.).
However, my points are that unRAID is not a cheaper solution, that supporting more than one box, as you are doing, is more complex for a novice, and that local storage with RAID always has the best performance. I think we should agree on these points. And these are really the only important points I was trying to make in this very long thread.
Quote:
My point is the cards I've looked at for the features I required of them were Areca/LSI/3ware cards and they were not cheap (like the $150 you mention), think $300-400 for a 4 port. And as I've illustrated, 4 drives isn't enough, I'd need at least 8, in fact I was looking at SAS cards (eg 3ware 9690-4I), which comes out to be about $350 + $225 for the SAS expander, so that's $575 just in RAID hardware, or $500 or so for an 8-port card.
|
As I said, I pay $300-$400 for a good card. But the features of that card are not required for a more novice individual to succeed. And those features are also much better than what you get from unRAID. That's why the lower-end $150 card is a better comparison.
Again, I wouldn't touch SAS drives now; they're just not there density- and $-wise.
However, the best solution now is something like a 6405 card from Adaptec. These cards allow you to use both SATA drives and an SSD in the same array; the SSD serves as a giant, super-fast cache for the SATA-based RAID 5. Performance is damn remarkable, but then again this is major overkill. With >200MB/s sustained reads and >100MB/s sustained writes from a SATA-based RAID 5, who the hell needs more for HD video, even recording/watching 5-10 streams at a time? Nobody.
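A quick sanity check of that closing claim, assuming roughly 19.4 Mbit/s per stream (the ATSC broadcast HD transport rate, which is on the heavy end; cable and satellite HD are often lower):

```python
# Bandwidth needed for N simultaneous HD streams vs. the array's
# quoted 100+ MB/s sustained write figure. Per-stream bitrate is an
# assumed worst-ish case (ATSC broadcast HD, ~19.4 Mbit/s).
MBIT_PER_STREAM = 19.4

def total_mb_per_s(streams, mbit_per_stream=MBIT_PER_STREAM):
    """Total disk bandwidth in MB/s for the given number of streams."""
    return streams * mbit_per_stream / 8  # Mbit/s -> MB/s

for n in (5, 10):
    print(f"{n} HD streams: ~{total_mb_per_s(n):.1f} MB/s")
```

Even 10 simultaneous streams come to roughly 24 MB/s, a fraction of the array's sustained write figure, which backs up the "nobody needs more" point.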