SageTV Community  

  #41  
Old 01-02-2010, 10:43 PM
stanger89 is offline
SageTVaholic
 
Join Date: May 2003
Location: Marion, IA
Posts: 15,188
Quote:
Originally Posted by brainbone View Post
From the previous document provided:
"One possible solution would be to issue a single 400kb write. This would mean the application needs to deal with data records that size or do its own buffering."

Many of the other suggestions in the document don't seem to fit well.
It also stated that while you could do that, it wouldn't prevent fragmentation, which was shown by the results of the test where, IIRC, the resulting file had 10 fragments just like in the original test.

And further, you really can't buffer and then write a potentially 20GB file.
Reply With Quote
  #42  
Old 01-03-2010, 02:43 AM
GKusnick is offline
SageTVaholic
 
Join Date: Dec 2005
Posts: 5,083
Quote:
Originally Posted by stanger89 View Post
That's not exactly what it was saying, not the way I read it. The way I read it, the point was basically that to minimize fragmentation you should tell the file system how large the file is going to be beforehand; then the file system can do its best to allocate that space in one contiguous area.
That's how I read it too. The article is not about buffering larger chunks in memory; it's about pre-allocating the file on disk, either all at once before writing anything, or in 64MB chunks as the file grows. This may be a "best practice" for non-real-time apps where read performance is paramount and write performance doesn't matter much, but I still claim it's a "worst practice" for Sage because of Sage's particular need to capture multiple real-time data streams as efficiently as possible, with a minimum of disk seeking.
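
To make that concrete, here's a minimal sketch of that kind of pre-allocation using plain Win32 calls (CreateFileW, SetFilePointerEx, SetEndOfFile). The path and the single 64MB chunk are just illustrative of what the article describes, and error handling is pared down:

Code:
// Sketch only: pre-allocate (or grow in 64MB chunks) a recording file so
// NTFS can try to hand out one contiguous run of clusters up front.
// Error handling is abbreviated.
#include <windows.h>
#include <cstdint>
#include <cstdio>

static bool PreallocateFile(HANDLE file, std::int64_t bytes)
{
    LARGE_INTEGER size;
    size.QuadPart = bytes;
    // Move the file pointer to the target size and mark that as EOF;
    // NTFS then allocates clusters for the whole range in one request
    // instead of a little at a time.
    if (!SetFilePointerEx(file, size, nullptr, FILE_BEGIN))
        return false;
    return SetEndOfFile(file) != FALSE;
}

int main()
{
    HANDLE file = CreateFileW(L"D:\\recordings\\test.ts", GENERIC_WRITE, 0,
                              nullptr, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                              nullptr);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    const std::int64_t kChunk = 64LL * 1024 * 1024;   // 64MB, per the article
    if (!PreallocateFile(file, kChunk))
        std::fprintf(stderr, "preallocation failed: %lu\n", GetLastError());

    // Seek back to the start before streaming data into the reserved space;
    // a real writer would extend by another chunk as it nears the end.
    LARGE_INTEGER zero = {};
    SetFilePointerEx(file, zero, nullptr, FILE_BEGIN);

    CloseHandle(file);
    return 0;
}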

Quote:
Originally Posted by brainbone View Post
How many bytes does SageTV actually commit with each write request? I have no idea.
Looks like 64K according to Performance Monitor. But then my disk is formatted with 64K clusters, so possibly Sage is committing smaller chunks and the OS is aggregating them to fill whole clusters. Procmon should be able to tell the difference, but that's a more involved experiment than I care to undertake at bedtime.

So if the claim is that Sage could be smarter about buffering larger chunks to reduce fragmentation, then fine, I won't argue with that. My guess is that they're currently building the capture graph using a stock file writer filter that doesn't allow for such fine tuning. So fixing it is probably not just a matter of tweaking a buffer size parameter, but may require rolling their own file writer filter to expose such a parameter.
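
Just to illustrate the kind of buffering such a custom filter could do internally (a hypothetical sketch, not SageTV's code, with all the DirectShow plumbing left out): accumulate the small incoming sample payloads into one large buffer and commit it with a single WriteFile per flush.

Code:
// Hypothetical sketch only. Collects small payloads into one large buffer
// and commits them with a single WriteFile call, so the file system sees a
// few big requests instead of many tiny ones.
#include <windows.h>
#include <cstddef>
#include <cstring>
#include <vector>

class BufferedFileWriter
{
public:
    BufferedFileWriter(HANDLE file, std::size_t bufferBytes)
        : file_(file), buffer_(bufferBytes), used_(0) {}

    ~BufferedFileWriter() { Flush(); }

    bool Write(const void* data, std::size_t bytes)
    {
        const char* p = static_cast<const char*>(data);
        while (bytes > 0) {
            std::size_t room = buffer_.size() - used_;
            std::size_t chunk = bytes < room ? bytes : room;
            std::memcpy(buffer_.data() + used_, p, chunk);
            used_ += chunk;
            p += chunk;
            bytes -= chunk;
            if (used_ == buffer_.size() && !Flush())
                return false;
        }
        return true;
    }

    bool Flush()
    {
        if (used_ == 0)
            return true;
        DWORD written = 0;
        // One large commit instead of many small ones.
        BOOL ok = WriteFile(file_, buffer_.data(), static_cast<DWORD>(used_),
                            &written, nullptr);
        bool complete = ok && written == static_cast<DWORD>(used_);
        used_ = 0;
        return complete;
    }

private:
    HANDLE file_;
    std::vector<char> buffer_;
    std::size_t used_;
};

The buffer size passed to the constructor is exactly the kind of parameter a custom filter could then expose for tuning.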
__________________
-- Greg
Reply With Quote
  #43  
Old 01-03-2010, 09:58 AM
brainbone is offline
Sage Expert
 
Join Date: Oct 2006
Posts: 624
Quote:
Originally Posted by stanger89 View Post
It also stated that while you could do that, it wouldn't prevent fragmentation, which was shown by the results of the test where, IIRC, the resulting file had 10 fragments just like in the original test.
No. They state that writing a 400k file in multiple 1k blocks results in fragmentation.

Committing in large blocks is a well known way of keeping NTFS from fragmenting the block you are committing, provided there are enough contiguous blocks available.

Quote:
Originally Posted by stanger89 View Post
And further, you really can't buffer and then write a potentially 20GB file.
You don't need to buffer a 20GB file, you just need to buffer enough to offset the performance hit of seeking. Since 64k clusters help, something as small as a 64k buffer would probably work.
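
As a rough sanity check on the numbers (assuming a full ~19.39 Mbps ATSC transport stream per tuner), here's how often buffers of a few sizes would fill and force a write:

Code:
// Rough numbers only: how long does a single ~19.39 Mbps ATSC stream take
// to fill a write buffer of a given size, and how many write requests per
// second does that imply?
#include <cstdio>

int main()
{
    const double bytesPerSec = 19.39e6 / 8.0;   // ~2.4 MB/s per tuner

    const double bufferSizes[] = {64.0 * 1024, 1024.0 * 1024, 4096.0 * 1024};
    for (double buf : bufferSizes) {
        double secondsPerFlush = buf / bytesPerSec;
        std::printf("%8.0f-byte buffer: flush every %7.1f ms (%5.1f writes/s)\n",
                    buf, secondsPerFlush * 1000.0, 1.0 / secondsPerFlush);
    }
    return 0;
}

Even a buffer of a few megabytes only holds a second or two of video per tuner, so the memory cost of buffering more aggressively stays small.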

Quote:
Originally Posted by GKusnick View Post
So if the claim is that Sage could be smarter about buffering larger chunks to reduce fragmentation, then fine, I won't argue with that. My guess is that they're currently building the capture graph using a stock file writer filter that doesn't allow for such fine tuning. So fixing it is probably not just a matter of tweaking a buffer size parameter, but may require rolling their own file writer filter to expose such a parameter.
Fixing it would likely also fix issues like the loss of video between back-to-back shows (i.e., live TV).

My point, and I think it is spelled out quite well, is that it is a weakness of SageTV. Relying on 64k block size to fix it, rather than fixing the underlying weakness, is a bit hack-ish to me.
Reply With Quote
  #44  
Old 01-03-2010, 10:14 AM
stanger89 is offline
SageTVaholic
 
Join Date: May 2003
Location: Marion, IA
Posts: 15,188
Quote:
Originally Posted by brainbone View Post
No. They state that writing a 400k file in multiple 1k blocks results in fragmentation.

Committing in large blocks is a well known way of keeping NTFS from fragmenting the block you are committing, provided there are enough contiguous blocks available.
You're right, I was confusing a side note about program 3 with program 2.

Quote:
Fixing it would likely also fix issues like the loss of video between back-to-back shows (i.e., live TV).
Not really; the loss of video is due to tearing down the capture graph (sink filter and all), retuning, and building a new one for the next show.
Reply With Quote
  #45  
Old 01-03-2010, 11:56 AM
brainbone is offline
Sage Expert
 
Join Date: Oct 2006
Posts: 624
Quote:
Originally Posted by stanger89 View Post
Not really; the loss of video is due to tearing down the capture graph (sink filter and all), retuning, and building a new one for the next show.
Yes. I meant (but failed) to imply that if you buffer to memory first, you open up more possibilities for handling this without rebuilding the graph -- though this could admittedly get very complicated very quickly (and it's rapidly moving away from the topic at hand, so my apologies for the hijack -- but, taking a wild guess, I have to think an out-of-the-box solution already exists for this very purpose).

Please don't get me wrong. I take a critical view of SageTV's shortcomings not because I don't like the product, but because I do like it and would like to see it improve. My view is that excusing faults, however slight, does not help anything.
Reply With Quote
  #46  
Old 01-06-2010, 11:15 PM
blueroom is offline
Sage User
 
Join Date: Apr 2009
Location: Toronto, Canada
Posts: 74
exFAT supports monstrous 32MB clusters; I would guess they'd be pretty tough to fragment. I only record OTA HD, so the files are always several gigabytes.
__________________
ASUS M3A78-T, AMD5050E, 2G DDR2, Radeon 4550 HDMI fanless, HVR-2250, HVR-1600, AppleTV, MCE Remote
Reply With Quote
  #47  
Old 01-07-2010, 10:08 AM
sandor is offline
Sage Expert
 
Join Date: Dec 2006
Location: Philadelphia, PA USA
Posts: 621
A little off topic from where the thread is going, but does anyone know how to format a drive with 64k blocks on Mac OS X 10.5+? (The drives are formatted HFS+ with journaling.)

With the built-in software, it is only possible to set the block size when creating a RAID array. I haven't noticed any issues at all with recordings (I have 4 ATSC/QAM tuners all recording to the same 2 TB drive), but I have a few hours of free time and wanted to dick around with Sage a bit.
__________________
MacBook Core2Duo 2 ghz
nVidia 9400M GPU
46" Sammy HLP4663 720p DLP
2x HDHR, all OTA
QNAP TS-809:
12.5 TB for Recordings/Imports/TimeMachine/Music
HD200 via 802.11n in Living Room
802.11n client in bedroom
Reply With Quote
  #48  
Old 01-07-2010, 07:02 PM
brainbone is offline
Sage Expert
 
Join Date: Oct 2006
Posts: 624
OSX may be better about block allocation in general, so I wouldn't worry about it, unless you start noticing severe fragmentation and problems recording/playing back video.
Reply With Quote
  #49  
Old 01-08-2010, 06:54 PM
sandor is offline
Sage Expert
 
Join Date: Dec 2006
Location: Philadelphia, PA USA
Posts: 621
Quote:
Originally Posted by brainbone View Post
OSX may be better about block allocation in general, so I wouldn't worry about it, unless you start noticing severe fragmentation and problems recording/playing back video.
Cool, everything I have read about HFS+ with journaling points towards that, but I was just wondering.
__________________
MacBook Core2Duo 2 ghz
nVidia 9400M GPU
46" Sammy HLP4663 720p DLP
2x HDHR, all OTA
QNAP TS-809:
12.5 TB for Recordings/Imports/TimeMachine/Music
HD200 via 802.11n in Living Room
802.11n client in bedroom
Reply With Quote
  #50  
Old 01-09-2010, 07:08 AM
martincmartin is offline
Sage User
 
Join Date: Dec 2009
Location: Boston, MA, USA
Posts: 15
I did a little experiment the other day: I copied my Program Files folder to a partition with a 64k block size. It was 10.5 GB of data and took 12.5 GB of disk space, so for regular files that's about a 20% overhead (at least for these files, for me). It should scale roughly linearly, so a 32k block size would have about 10% overhead, 16k about 5%, and so on.
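
That matches the usual slack-space estimate: each file wastes, on average, about half a cluster in its final, partially filled cluster, so the overhead scales roughly linearly with cluster size and bites hardest on trees full of small files (like Program Files) while staying negligible for multi-gigabyte recordings. Here's a quick sketch (C++17, made-up defaults) that estimates the slack for a directory tree at a given cluster size:

Code:
// Rough slack estimator: for each regular file, the last partially filled
// cluster wastes (cluster - size % cluster) bytes. This ignores NTFS
// details like MFT-resident tiny files, so treat the output as an estimate.
#include <cstdint>
#include <cstdio>
#include <filesystem>

int main(int argc, char** argv)
{
    namespace fs = std::filesystem;
    const fs::path root = argc > 1 ? argv[1] : ".";
    const std::uint64_t cluster = 64 * 1024;   // try 64k, 32k, 16k, ...

    std::uint64_t data = 0, slack = 0;
    for (const auto& entry : fs::recursive_directory_iterator(
             root, fs::directory_options::skip_permission_denied)) {
        if (!entry.is_regular_file())
            continue;
        const std::uint64_t size = entry.file_size();
        data += size;
        const std::uint64_t tail = size % cluster;
        if (tail != 0)
            slack += cluster - tail;
    }
    std::printf("data: %llu bytes, slack: %llu bytes (%.1f%% overhead)\n",
                (unsigned long long)data, (unsigned long long)slack,
                data ? 100.0 * slack / data : 0.0);
    return 0;
}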
Reply With Quote