SageTV Community  

  #21  
Old 12-30-2009, 01:29 AM
Greg Greg is offline
Sage Advanced User
 
Join Date: Jan 2009
Posts: 154
Quote:
Originally Posted by GKusnick View Post
Yes, hard drives have buffers, which can help smooth out occasional hiccups. But if a file is fragmented to the point where the average fragment contains fewer milliseconds of video than the average seek time between fragments, then buffering can't make up the difference (short of buffering the entire file, and no drive has a buffer that big). That's the point of large clusters: to keep the fragment size well above that seek-time threshold, no matter how fragmented the file gets.
Greg,

Thanks for that very clear explanation... makes perfect sense.

Now wouldn't it be cool if you could buffer up the whole show or movie!

Greg
  #22  
Old 12-30-2009, 09:01 AM
stanger89's Avatar
stanger89 stanger89 is offline
SageTVaholic
 
Join Date: May 2003
Location: Marion, IA
Posts: 15,188
Quote:
Originally Posted by mr_lore View Post
See setup below, I've got all kinds of HD flying all over the place and not a 64k cluster in sight, never had any issues.
Not using 64k clusters doesn't guarantee you'll have problems, but it does make them more likely. That's why it's recommended (but not required) that you use them.

Quote:
Originally Posted by stevech View Post
Also, I think the I/O's per second reduces with the use of 64K blocks for large files.
Maybe, but it also dramatically reduces the number of I/Os required to read/write a given amount of data, since more is read/written with each one.
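
For a rough sense of the scale (back-of-the-envelope only, nothing Sage-specific):

Code:
# Illustrative only: I/O operations needed to move 1 GiB of video,
# assuming one I/O per cluster in the worst case.
GIB = 1024 * 1024 * 1024

for cluster_kb in (4, 16, 64):
    ios = GIB // (cluster_kb * 1024)
    print(f"{cluster_kb:>2} KB clusters -> {ios:,} I/Os per GiB")

# 4 KB clusters -> 262,144 I/Os per GiB; 64 KB clusters -> 16,384 (16x fewer)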
  #23  
Old 01-01-2010, 02:00 AM
brainbone brainbone is offline
Sage Expert
 
Join Date: Oct 2006
Posts: 624
I think what we're really talking about here is an insufficient read-ahead (or write) buffer in SageTV. The bit rate of 720p/1080i mpeg2 isn't to the point where a modern hard disk with any block size should have an issue. A sufficient buffer should be able to compensate for any fragmentation or other hiccup that may occur during playback/recording.

That said, with SageTV and higher bit-rate material, a 64k block size does seem to help -- it's just another "feature" of SageTV that bothers me.
  #24  
Old 01-01-2010, 03:17 AM
GKusnick's Avatar
GKusnick GKusnick is offline
SageTVaholic
 
Join Date: Dec 2005
Posts: 5,083
Quote:
Originally Posted by brainbone View Post
I think what we're really talking about here is an insufficient read-ahead (or write) buffer in SageTV. The bit rate of 720p/1080i mpeg2 isn't to the point where a modern hard disk with any block size should have an issue. A sufficient buffer should be able to compensate for any fragmentation or other hiccup that may occur during playback/recording.
It's true that for contiguous files, disk throughput is more than sufficient to keep up with video bitrates, so that cluster size doesn't matter. But for fragmented files it's a different story, because the seek time between fragments drastically lowers throughput. If the net throughput, including seek time, falls below the video bitrate, then the buffer is being emptied faster than it can be filled, and you can't compensate for that by making the buffer bigger. But you can avoid that situation by making the clusters bigger so that fewer seeks are needed and the throughput doesn't suffer as much from fragmentation.
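
To put some rough numbers behind that (an illustrative sketch with made-up drive figures, not measurements), treat each fragment as one seek plus one sequential transfer and compare the resulting net throughput to the video bitrate:

Code:
# Net throughput of a fragmented file: one seek per fragment plus a
# sequential transfer. Drive numbers below are illustrative assumptions.
def net_throughput_mb_s(fragment_kb, seek_ms=8.5, transfer_mb_s=80.0):
    transfer_ms = fragment_kb / 1024 / transfer_mb_s * 1000
    return (fragment_kb / 1024) / ((seek_ms + transfer_ms) / 1000)

video_mb_s = 2.4  # roughly a 19 Mbps ATSC HD stream

for frag_kb in (4, 64, 512):
    rate = net_throughput_mb_s(frag_kb)
    status = "keeps up" if rate > video_mb_s else "underruns"
    print(f"{frag_kb:>3} KB fragments -> {rate:6.2f} MB/s ({status})")

With 4 KB fragments the drive spends nearly all its time seeking and falls below the stream rate; at 64 KB it's comfortably above it, which is the whole point of the larger clusters.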
__________________
-- Greg
  #25  
Old 01-01-2010, 10:11 AM
stanger89's Avatar
stanger89 stanger89 is offline
SageTVaholic
 
Join Date: May 2003
Location: Marion, IA
Posts: 15,188
Exactly, even a Velociraptor (10k drive) drops to only 1.5MB/sec when you get to 4k random writes:
http://www.anandtech.com/storage/sho...spx?i=3667&p=6
  #26  
Old 01-01-2010, 11:33 AM
mozerd mozerd is offline
Sage User
 
Join Date: Nov 2009
Location: Nepean, Ontario Canada
Posts: 18
Perhaps a Tool like Diskeeper that dynamically prevents fragmentation before it happens is a better solution.
  #27  
Old 01-01-2010, 12:03 PM
paulbeers paulbeers is offline
SageTVaholic
 
Join Date: Jun 2005
Posts: 2,550
Quote:
Originally Posted by mozerd View Post
Perhaps a Tool like Diskeeper that dynamically prevents fragmentation before it happens is a better solution.
I'm not sure how running a program on your PC that actually adds additional disk reads/writes (in order to keep everything "together") is better than just making sure your recording drives/partitions are set to 64KB blocks. Also, one is free (since it is included with the OS) and one costs $40. Just saying.
__________________
Sage Server: AMD Athlon II 630, Asrock 785G motherboard, 3GB of RAM, 500GB OS HD in RAID 1 and 2 - 750GB Recording Drives, HDHomerun, Avermedia HD Duet & 2-HDPVRs, and 9.0TB storage in RAID 5 via Dell Perc 5i for DVD storage
Source: Clear QAM and OTA for locals, 2-DishNetwork VIP211's
Clients: 2 Sage HD300's, 2 Sage HD200's, 2 Sage HD100's, 1 MediaMVP, and 1 Placeshifter
  #28  
Old 01-01-2010, 12:13 PM
GKusnick's Avatar
GKusnick GKusnick is offline
SageTVaholic
 
Join Date: Dec 2005
Posts: 5,083
Quote:
Originally Posted by mozerd View Post
Perhaps a Tool like Diskeeper that dynamically prevents fragmentation before it happens is a better solution.
Better how? Formatting a new partition in 64K clusters is something you do once and then forget about, and it imposes no extra runtime overhead during recording to avoid fragmentation.

In fact I'd argue that fragmentation is what you want when recording multiple streams to the same disk at the same time. You don't want the head seeking all over the disk trying to keep each file nice and tidy, because seeking kills throughput. You want each block written as quickly as possible to the next sequential free cluster, regardless of which file it belongs to, in order to maximize throughput and minimize buffer overruns and damaged recordings. Any "intelligent" process that interferes with that is likely to limit the number of simultaneous recordings you can make.
__________________
-- Greg
  #29  
Old 01-02-2010, 05:50 AM
mozerd mozerd is offline
Sage User
 
Join Date: Nov 2009
Location: Nepean, Ontario Canada
Posts: 18
I have been using Diskeeper for the past 12 years and have found that it does what it is designed to do very well -- all my systems that are running Diskeeper are generally much more responsive.

Windows NT, Windows 2000, Windows XP Pro, Windows Vista and Windows 7 are all pre-emptive multi-tasking operating systems -- applications that are properly written to take advantage of that capability will work much more efficiently, especially if more than one processor is involved -- Diskeeper is one such application. I do not know if SageTV is one such application.

I am very new to SageTV so I currently have no idea if SageTV and Diskeeper will produce a superior outcome from a performance perspective -- I suspect that it will, but only experience will determine that.

In my post I stated that perhaps Diskeeper is a better solution -- it's free to try.

Diskeeper features that may be of interest.
  #30  
Old 01-02-2010, 06:16 AM
JetreL's Avatar
JetreL JetreL is offline
Sage Aficionado
 
Join Date: Jun 2008
Location: Charlotte, NC
Posts: 388
Diskeeper is a great product. The problem with running Diskeeper on your recording drives is that it could cause them to fail sooner because of the extra heat and I/O created when moving such big files around a nearly full drive.

The best solution is to reformat the drives to 64k blocks, run Diskeeper periodically (not actively), and leave extra space on your recording drive so there is contiguous space for new files / defragging.
  #31  
Old 01-02-2010, 07:21 AM
mozerd mozerd is offline
Sage User
 
Join Date: Nov 2009
Location: Nepean, Ontario Canada
Posts: 18
IMO using a large cluster size [64k blocks] will decrease the amount of fragmentation, as fewer locations are needed to write the data, which in turn also increases write performance and eventually read performance -- so it does make sense to have SageTV write/read its media files on hard disks dedicated to large media files and keep the OS on its own dedicated drive.

Diskeeper effectively solves the issue of file fragmentation -- Windows unfortunately does not do a good job keeping file fragmentation to a minimum -- for those that do not want to fool around with cluster sizes -- usually after the fact -- I suspect that Diskeeper may be a good solution.
  #32  
Old 01-02-2010, 08:21 AM
pknowles's Avatar
pknowles pknowles is offline
Sage User
 
Join Date: Mar 2007
Location: Huntingtown, MD
Posts: 65
When I first started using Sage in 2005 I used 4k blocks, and after 8 months disk fragmentation started to cause skipping on playback. I then ordered a 500GB disk, formatted it to 64k blocks, and have never had an issue and never defragged it.

The issue really comes down to whether the hard drive can seek and read faster than the data rate required to smoothly play the content. If you have a 10GB/hr recording then that file is 2.778 KB/ms (assuming the stream is evenly distributed). On my Seagate 7200.11 drive, the spec sheet lists the random read seek time as <8.5ms. If I take 8.5ms as the worst case (total fragmentation with clusters in the worst spots on the disk, which is very unlikely) then the hard drive can roughly keep up if the minimum block size is larger than 23.613KB (2.778*8.5). So for this drive, if I format to 32KB blocks (the next available cluster size above 23.613KB), the disk can be totally fragmented and still not be a bottleneck. You can factor in misreads or any other small additional issues that will increase the minimum block size if you want. It's a simplified example, but I think it illustrates the point that 4KB blocks won't cut it unless you maintain very little fragmentation on the disk.

BTW I used 1GB=1000MB instead of 1024MB; the assumptions above have more variation than the 2.5% error from being lazy.
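
If anyone wants to plug in their own drive's seek time and recording bitrate, here's the same arithmetic as a tiny script (same assumptions as above, just parameterized):

Code:
# Worst case: a fully fragmented file, one worst-case seek per cluster.
def min_cluster_kb(gb_per_hour, seek_ms):
    kb_per_ms = gb_per_hour * 1000000 / 3600000.0  # stream rate in KB/ms (1 GB = 1,000,000 KB, as above)
    return kb_per_ms * seek_ms

print(round(min_cluster_kb(10, 8.5), 1))  # ~23.6 KB -> round up to the next NTFS cluster size, 32 KB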
__________________
Phil K.
  #33  
Old 01-02-2010, 12:29 PM
brainbone brainbone is offline
Sage Expert
 
Join Date: Oct 2006
Posts: 624
Quote:
Originally Posted by GKusnick View Post
If the net throughput, including seek time, falls below the video bitrate, then the buffer is being emptied faster than it can be filled, and you can't compensate for that by making the buffer bigger.
You can compensate with a buffer up to a point. If the file system is so badly fragmented that the buffer never gets a chance to recharge, then yes, you could have a problem -- but in testing Windows Media Center (in Windows 7), using only a single disk for OS and recordings (from multiple tuners), I never seemed to run into the issue (I still prefer SageTV, even with the 64k cluster requirement, for other reasons).

How MCE is apparently able to avoid severe fragmentation using a single disk with 4k clusters, while SageTV is not, I'm not sure. Perhaps SageTV doesn't follow some of these recommendations?

Last edited by brainbone; 01-02-2010 at 12:40 PM.
  #34  
Old 01-02-2010, 02:09 PM
GKusnick's Avatar
GKusnick GKusnick is offline
SageTVaholic
 
Join Date: Dec 2005
Posts: 5,083
Quote:
Originally Posted by brainbone View Post
Perhaps SageTV doesn't follow some of these recommendations?
From the linked document (emphasis added):

Quote:
The recommendations are intended to avoid fragmentation at the cost of decreased performance while writing to a file. The assumption is that the file is written to less often, and read more often; hence optimizing for read performance is the desired goal.
But as I've already argued in post #28 above, this is not the desired goal in Sage when recording multiple streams at once. In that case the overriding goal is to capture the incoming real-time data streams without error. Write performance matters for that because there isn't enough RAM in the system to buffer the incoming data indefinitely. So the smart tradeoff is to accept some degree of fragmentation in order to maintain high write throughput, and use large clusters to ensure smooth playback of fragmented files.

I'm not an MCE user and don't know how it handles the case of four or five simultaneous HD recordings (assuming it can handle that many at all), but my guess is that it probably relaxes its no-fragmentation rule under heavy recording loads.
__________________
-- Greg
  #35  
Old 01-02-2010, 03:15 PM
stanger89's Avatar
stanger89 stanger89 is offline
SageTVaholic
 
Join Date: May 2003
Location: Marion, IA
Posts: 15,188
That paper also mentions that the NTFS caching/allocation scheme is not documented. However, being that MS created both NTFS and WMC, it's not a big leap to think that maybe WMC was made with non-public proprietary knowledge of NTFS and/or with undocumented hooks to avoid problems.

Of course there's another thing to consider: Sage doesn't place limits on clients or tuners, so you can end up with massive SageTV systems that would be impossible in WMC. WMC, on the other hand, is (or at least has been) more focused on the single-user, dual-tuner environment, where you can get by with less optimization than you can in a larger system.
  #36  
Old 01-02-2010, 06:19 PM
brainbone brainbone is offline
Sage Expert
 
Join Date: Oct 2006
Posts: 624
Quote:
Originally Posted by GKusnick View Post
But as I've already argued in post #28 above, this is not the desired goal in Sage when recording multiple streams at once. In that case the overriding goal is to capture the incoming real-time data streams without error.
The basic idea, as I understand it, is to buffer more data in memory, so larger chunks are handed off during each write, allowing larger chunks of the file system to be allocated on each write. This wouldn't really "slow down" write performance -- just delay it. The faster the buffer fills to the desired level, the sooner one would write it out to disk. While one buffer is being flushed to disk, another can begin to fill. Similar optimizations are used in IP packet scheduling to increase performance by lowering packet fragmentation, at the cost of some latency. This small amount of latency for an increase in write buffering would likely go unnoticed in SageTV.
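
Something like this is what I have in mind -- just a toy sketch, not SageTV's actual I/O code, and the 4 MB chunk size is a made-up number:

Code:
import threading, queue

CHUNK = 4 * 1024 * 1024  # hypothetical chunk size, not SageTV's real value

def writer(path, chunks):
    # Drain filled buffers to disk while the capture side fills new ones.
    with open(path, "wb", buffering=0) as f:
        for buf in iter(chunks.get, None):  # None = capture finished
            f.write(buf)  # one large write -> the file system can allocate a big extent

def capture(stream, chunks):
    # Accumulate small stream packets into one large chunk before handing it off.
    pending = bytearray()
    for packet in stream:
        pending.extend(packet)
        if len(pending) >= CHUNK:
            chunks.put(bytes(pending))
            pending.clear()
    if pending:
        chunks.put(bytes(pending))
    chunks.put(None)

# Toy usage: a fake "tuner" producing 64 KB packets (~16 MB total)
fake_stream = (b"\x00" * 65536 for _ in range(256))
q = queue.Queue(maxsize=4)  # a few chunks of look-ahead = the added latency
t = threading.Thread(target=writer, args=("recording.ts", q))
t.start()
capture(fake_stream, q)
t.join()

The only cost is the latency of waiting for each chunk to fill, which is what I mean about it going unnoticed.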

Windows 7 Ultimate supports up to 4 digital tuners. I tested with 2 digital (HD streams) and 2 analog, without issue. SageTV did have issues without 64k clusters using the same 4 tuners.

Quote:
Originally Posted by stanger89 View Post
That paper also mentiones that the NTFS caching/allocation scheme is not documented. However being that MS created both NTFS and WMC, it's not a big leap to think that maybe WMC was made with non-public proprietary knowledge of NTFS and/or with undocumented hooks to avoid problems.
True. It's also possible that Windows 7 is better with NTFS allocation than XP. All my SageTV tests were done under XP SP3, and all my MC tests were under Windows 7. Not exactly a fair comparison, I suppose.

Last edited by brainbone; 01-02-2010 at 07:32 PM.
  #37  
Old 01-02-2010, 07:27 PM
stanger89's Avatar
stanger89 stanger89 is offline
SageTVaholic
 
Join Date: May 2003
Location: Marion, IA
Posts: 15,188
Quote:
Originally Posted by brainbone View Post
The basic idea, as I understand it, is to buffer more data in memory, so larger chunks are handed off during each write, allowing larger chunks of the file system to be allocated on each write.
That's not exactly what it was saying, at least not the way I read it. The way I read it was basically that to minimize fragmentation you should tell the file system how large the file is going to be beforehand, so the file system can do its best to allocate that space in one contiguous area.

First big problem with that is Sage doesn't really know how big the file is going to be when it starts a recording.

Quote:
This wouldn't really "slow down" write performance -- just delay it.
The performance hit (I'd imagine) would at least partly be in the overhead of trying to determine how much space is needed before every write. Probably more of an issue for small file writes like the article talked about than for massive ones like Sage does.

Quote:
The faster the buffer fills to the desired level, the sooner one would write it out to disk. While one buffer is being flushed to disk, another can begin to fill. Similar optimizations are used in IP packet scheduling to increase performance by lowering packet fragmentation, at the cost of some latency. This small amount of latency for an increase in write buffering would likely go unnoticed in SageTV.
I wouldn't be so sure about that; it's latency that kills Sage, and that's what larger clusters are designed to combat.

Quote:
Windows 7 ultimate supports up to 4 digital tuners. I tested with 2 digital (HD streams) and 2 analog, without issue. SageTV did have issues without 64k clusters using the same 4 tuners.
But Win 7 also has access to MS proprietary information that Sage doesn't.
  #38  
Old 01-02-2010, 07:58 PM
MattHelm MattHelm is offline
Sage Icon
 
Join Date: Jun 2005
Location: Chicago, IL
Posts: 1,209
Quote:
Originally Posted by brainbone View Post
True. It's also possible that Windows 7 is better with NTFS allocation than XP. All my SageTV tests were done under XP SP3, and all my MC tests were under Windows 7. Not exactly a fair comparison, I suppose.
Also, SageTV will run on 98SE with FAT. Will MC? I haven't looked at the spec for that.
__________________
Server #1= AMD A10-5800, 8G RAM, F2A85-M PRO, 12TB, HDHomerun Prime, HDHR, Colossus (Playback - HD-200)
Server #2= AMD X2 3800+, 2G RAM, M2NPV-VM, 2TB, 3x HDHR OTA (Playback - HD-200)
  #39  
Old 01-02-2010, 08:11 PM
CollinR CollinR is offline
Sage Icon
 
Join Date: Dec 2004
Location: Tulsa, OK
Posts: 1,305
It's pretty damn important; it's up there with balancing multiple drives by available space if you have them mapped individually.
  #40  
Old 01-02-2010, 10:30 PM
brainbone brainbone is offline
Sage Expert
 
Join Date: Oct 2006
Posts: 624
Quote:
Originally Posted by stanger89 View Post
That's not exactly what it was saying, at least not the way I read it. The way I read it was basically that to minimize fragmentation you should tell the file system how large the file is going to be beforehand, so the file system can do its best to allocate that space in one contiguous area.
From the previous document provided:
"One possible solution would be to issue a single 400kb write. This would mean the application needs to deal with data records that size or do its own buffering."

Many of the other suggestions in the document don't seem to fit SageTV well.

Quote:
Originally Posted by stanger89 View Post
First big problem with that is Sage doesn't really know how big the file is going to be when it starts a recording.
It doesn't need to. As long as you can handle some latency in your writes, you just need to try writing in larger blocks. Sending smaller blocks will result in fragmentation, unless you're lucky enough for the OS to cache it first, or have figured out a pre-allocation strategy that actually works well with NTFS.

Quote:
Originally Posted by stanger89 View Post
The performance hit (I'd imagine) would at least partly be in the overhead of trying to determine how much space is needed before every write. Probably more of an issue for small file writes like the article talked about than for massive ones like Sage does.
I don't know how "massive" SageTV writes are. How many bytes does SageTV actually commit with each write request? I have no idea.

The NTFS driver needs to allocate the blocks it is going to write regardless. What is a larger performance hit, waiting to commit 4096KB once, or committing 4KB 1024 times? With the buffering method, your only performance hit is exactly the latency of how long it took to receive the first 4096KB (assuming you use a 4MB buffer).
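
A crude way to try that question yourself (a toy benchmark only -- results will depend heavily on OS caching and the drive):

Code:
import os, time

data = os.urandom(4 * 1024 * 1024)  # 4 MB of test data

def timed_write(path, pieces):
    step = len(data) // pieces
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for i in range(pieces):
            f.write(data[i * step:(i + 1) * step])
        os.fsync(f.fileno())  # force it out to the disk
    return time.perf_counter() - start

print("1 x 4096 KB writes:", timed_write("one_big.bin", 1))
print("1024 x 4 KB writes:", timed_write("many_small.bin", 1024))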

Quote:
Originally Posted by stanger89 View Post
I wouldn't be so sure about that; it's latency that kills Sage, and that's what larger clusters are designed to combat.
No. Larger clusters decrease the penalty of fragmentation, since more data can be read between each potential seek. A little extra latency between writes can accomplish the same, except that a client will not be able to begin reading the new file until the buffer is committed, hence the latency. The length of the latency depends on the bitrate of the source and the size of the buffer -- but 64K fills quickly even with low-bitrate material. You could probably increase the buffer much more without anyone noticing.

Quote:
Originally Posted by stanger89 View Post
But Win 7 also has access to MS proprietary information that Sage doesn't.
It's possible, but waiting to commit larger blocks of data generally works without needing proprietary information. You may end up with fragmentation between blocks -- just as you may end up with fragmentation between the 64k clusters.

Here is another thread about a similar problem (skip past the pre-allocation talk and get to the meat):
http://lists.samba.org/archive/rsync...er/016836.html

Last edited by brainbone; 01-02-2010 at 10:42 PM.