Hardware Support: Discussions related to using various hardware setups with SageTV products. Anything relating to capture cards, remotes, infrared receivers/transmitters, system compatibility or other hardware-related problems or suggestions should be posted here.
#1
achieving gigabit speeds?
Hi
I just wondered if anyone had any tips for getting the most out of gigabit network cards and gigabit switches. We had a bad lightning storm here last week, and it fried some of my 100Mbps Linksys switches and some network cards. So I took the opportunity to purchase new 1000/100/10 Netgear switches and GA311 1Gbps network cards. I also purchased a few Cat6 network cables to see if that increased speed. I ran some tests and it seems like I am only getting about 200Mbps transfer rates across the network. All the involved switches and network cards report they are operating at the 1Gbps setting, so I don't understand how I can be missing out on something like 75% of my bandwidth. I was hoping someone has been down this road before and had a few tips or things to try. Since I was only getting about 80Mbps before the upgrade, I am not that unhappy, but then again, I should be getting a lot more throughput. I guess I was delirious for thinking I could just plug the stuff in and it would actually work right. Sigh.
#2
Could it be a hard disk limitation? Do you have a RAID setup you could test?
__________________
If this doesn't work right, Then: "I'm going to blow up the Earth!" |
#3
I think what you are seeing is pretty typical. Hardware, except for some of the enterprise-grade RAID stuff, can't come close to Gb speeds. Where Gb really shines is multiple connections to a device. You now have fewer bottlenecks, and you've said you are seeing about 120Mb more... so that's nothing to turn your nose up at.
I use a program at work called IPerf to test throughput, and what you are seeing is pretty normal! --Mike
Started thinking about it... most IDE hard drives use the ATA/133 interface, which is rated at 133MBytes/sec, but their sustained transfer rates are much lower than that, so the drives themselves are where most of the bottleneck is happening!
__________________
Win7Pro, SageTV v6.6, SageMC, Intel E6850, 2048MB DDR2, , ATI4750, LG BR/HDVD/DVD-Rom,1xHDHR, 1xPVR-1600 (1x DirectTV , 1x Comcast Analog Cable, and 3x OTA Digital), USB-UIRT, and Harmony 300 Remote + 1 MVP Extender + 1 PC Client. |
#4
Your PCI bus cannot handle that level of throughput. PCI-X and PCI-E can if properly equipped, and FWIW 200Mbps sounds about average for a PCI GbE setup.
#5
Duhhhh, I forgot about that part of the PC. So now my PCI bus is going to limit me, oh what a drag. I guess the next PC I build will have to be one that uses the PCI-X bus or whatever is fastest.
But you are right, I am happy about the 120Mbps+ speed increase. I just expected a lot more. Shame on me for not researching a little more. Oh well, GB network stuff is dirt cheap these days, so it gave me a good excuse to upgrade. I watched the CPU on the 2.4GHz Dell server go up to 75%-95% when it was transferring my test files of 300MB to 500MB, and also when I used the SpeedTestClient/Server software to shovel lots of data across the network. Even if I put two GB network cards in the Dell file server, I guess that's not going to help; it will just compound the problem, because then I am trying to shove even more data across a bus that is already filled up. So I will just have to take my 200Mbps+ and live with it.
#6
What exact model gigabit switch did you get?
#7
Quote:
8 ports. |
#8
Quote:
Hard drive sequential write speeds for the more common drives out there roughly fall anywhere between 30MBytes/second and ~70MBytes/second. Individual drives can obviously deviate drastically from those numbers, but this is a general assumption on my part. Using those values, if we say your drive is capable of writing ~40MBytes/second sequentially, that equates to roughly 320 megabits/second. So it could be a hard drive bottleneck in combination with fragmentation and PCI bus usage. Just my guess.
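To make the bytes-versus-bits conversion concrete (the drive figures below are just the illustrative ranges from this post, not measurements from anyone's actual disks):
Code:
# Convert typical sustained hard-drive write speeds into network-style megabits/second.
# The MBytes/s figures are illustrative, not measured values.

def mbytes_to_mbits(mbytes_per_sec):
    return mbytes_per_sec * 8

for sustained_mb in (30, 40, 70):
    print(f"{sustained_mb} MBytes/s sustained ~= {mbytes_to_mbits(sustained_mb):.0f} Mbit/s")

# Note that the ATA/133 interface rating (133 MBytes/s ~= 1064 Mbit/s) is a burst
# figure for the cable/controller; the platters themselves sustain far less.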
#9
Okay, just my .02 worth, but here is my understanding....
First off, it is gigabit, which only equates out to approx 125 MBytes/s. This is important because hard drives have a max sustained rate of about 60 MBytes/s, which equates out to about 480 Mbit/s sustained. Sure, that still isn't sustained gigabit, but it doesn't completely explain seeing only 200 Mbit/s either.
Secondly, the PCI bus has a speed of 133 MBytes/s, which equates out to approx 1064 Mbit/s. This theoretically could be your bottleneck, depending on what you have on your PCI bus (tuner cards and whatnot), but if you remember that even recording video at 3.2 gigabytes/hour works out to less than 1 MByte/s, that still leaves quite a bit of bandwidth on your PCI bus.
Third, going forward, most mid to upper range computers and motherboards (if building yourself) have onboard gigabit ethernet ports. These generally bypass the PCI bus and are actually the preferred option over separate cards, since they do not take away from the PCI bus.
Fourth, the disadvantage to gigabit ethernet is the processor usage. The standard packet (or frame) size is only 1518 bytes, or approx 12 kbit. This means that to sustain 1 Gbit/s you have to transfer over 82,000 frames per second. While each frame only requires a small fraction of your CPU, at that rate it can become very taxing! You may want to do some research into Jumbo Frames. If all of the computers on your LAN are using gigabit ethernet cards, that is definitely something you want to do. What it basically lets you do is cram more data into one large frame: instead of transferring only 1518 bytes per frame, you could do say 9108 bytes (roughly 6 standard frames per jumbo). This means you would only need to transfer about 14,000 frames per second, which greatly reduces CPU overhead. Unfortunately, not all gigabit hardware is jumbo frame compatible.
I believe the transfer speed is CPU limited more so than anything else. Very few people ever manage to actually transfer data at gigabit speeds (except between RAID-based servers that do nothing but file sharing), but I do believe you probably have more headroom if you tweak. I personally have a gigabit server while all the rest of my clients are 100BaseT, so I can not use Jumbo Frames; I am just happy that the backbone of my network has a lot of bandwidth, so that if I have multiple clients wanting to pull HD content from my server, the bandwidth is there.
Anyway, this is my research; I just noticed some inaccuracies in what was being posted. Yes, RAID would theoretically help, and yes, PCI-X or PCI-E would help to a small degree, but really a lot of it is CPU/packet limited. Gigabit is a whole new world of complicated compared to good old 10BaseT and 100BaseT, and it requires a lot of tweaking to get the most out of it. You can't just slap everything together and expect the most out of it; good hardware and a little luck are also needed.
__________________
Sage Server: AMD Athlon II 630, Asrock 785G motherboard, 3GB of RAM, 500GB OS HD in RAID 1 and 2 - 750GB Recording Drives, HDHomerun, Avermedia HD Duet & 2-HDPVRs, and 9.0TB storage in RAID 5 via Dell Perc 5i for DVD storage Source: Clear QAM and OTA for locals, 2-DishNetwork VIP211's Clients: 2 Sage HD300's, 2 Sage HD200's, 2 Sage HD100's, 1 MediaMVP, and 1 Placeshifter |
#10
Guess I was a little slow! I was starting my post, and lost track while watching a game!
__________________
Sage Server: AMD Athlon II 630, Asrock 785G motherboard, 3GB of RAM, 500GB OS HD in RAID 1 and 2 - 750GB Recording Drives, HDHomerun, Avermedia HD Duet & 2-HDPVRs, and 9.0TB storage in RAID 5 via Dell Perc 5i for DVD storage Source: Clear QAM and OTA for locals, 2-DishNetwork VIP211's Clients: 2 Sage HD300's, 2 Sage HD200's, 2 Sage HD100's, 1 MediaMVP, and 1 Placeshifter |
#11
When all else fails, check your cables. Remember that plain Cat5 cable isn't rated for the full 1Gb bandwidth, and a bad or marginal cable will bring things down very fast.
Also make sure that both the computer and the switch are in full duplex mode. At work, when we get a complaint about slow network speeds, 90% of the time the problem is that the switch or the computer is set to half-duplex mode. If they are currently set to automatic, force them both to FULL.
#12
Bus contention and theoretical max numbers don't mix, and those marketing numbers don't account for protocol overhead either, so a lot of these throughput discussions end up skewed. Ethernet framing, IP/TCP headers, inter-frame gaps and ACK traffic all eat into the nominal link rate, which is why you never see a full 100Mbps of payload out of 100Mb Ethernet, and GbE is no different.
The other part of the GbE equation, aside from properly spec'd equipment and PCI bus implementations, is how your hosts are configured. To achieve single host-to-host GbE speeds you normally have to increase the Ethernet framing so that each frame can carry a larger IP datagram. Raising the MTU to somewhere around 9000 (9180 on some gear) is the norm; however, that means that at Layer 2 (Ethernet) you must use jumbo frames. Your switch needs to support jumbo frames, and your NIC should too, but YMMV.
#13
My 2 cents - I use GB connections between my PCs, too. As a test I had two machines about three feet apart with a GB switch in the middle. The baseline test was to defrag the disk twice, reboot, defrag again, then xfer one 3.2GB file from machine A to machine B using CAT5e 3' patch cables from the machines to the switch. Then reboot, defrag twice more, reboot, defrag again, then use 3' CAT6 cables between the computers and the switch and xfer the same 3.2GB file from machine A to machine B. The CAT6 cables resulted in only a 3 second quicker xfer than the CAT5e. Instead of buying all new CAT6 cables and rewiring my walls I decided to stick with CAT5e. For me the cost/performance gain just wasn't there.
As a side note: Machine A is using Ultra320 SCSI RAID 5 (five 15K RPM drives on a card in a PCI-X 64-bit slot) and Machine B is "RAID" 0 (on-board) with two Seagate SATA drives.
__________________
386DX, 40MB HDD, 5-1/4" & 3-1/2" Floppies, 14.4K baud modem, DOS 6.2 and Windows 3.1 on a Samsung 55" LCD |
#14
Lots of great information here. Thanks so much. I had a feeling I would have to do more tweaking and testing to get more out of what I've got.
I can definitely upgrade all the PCs to use the GA311 Netgear card. I'm assuming at this point that it can support jumbo frames. If not, I'm hosed, because I already bought 4 and I'm not going backwards... that'll teach me for not asking first. I will have to find out if it supports jumbo frames. So what happens if a NIC can't support jumbo frames and all the other PCs are using jumbo frames? I suppose it just can't talk to the other machines, but it can still get out on the internet, since it would be talking to the internet router and not a PC. I did use Cat6 cables and Cat5e, and they didn't seem to make much difference; I was hoping to see more of a boost from them. I used DiskWriggler from PC to PC to test throughput, and it said like 10-11MB/sec. But that was before the network upgrade, so I will be running those tests again to see how much it is impacted. It's gotta be higher now. Otherwise I guess it's either the PCI bus or the hard drives that are the limiting factor. All the hard drives are current generation, 200GB+ 7200RPM ATA100. I could run specific disk I/O tests on the drives and see what they report. Then it's down to the PCI bus, and I don't know the best way to test that part of the PC.
#15
Quote:
__________________
unRaid Server:Quad-Core Xenon, 20 GB Ram, openDCT/sageTV Dockers, HDHR,HDHR Prime Network Encoder:Dell Inspiron 1000 Laptop, 512MB Ram, Windows XP, HDPVR |
#16
I would think that at these speeds the latency in the MS Windows I/O queues and scheduler would become important for data managed by the OS, such as disk I/O, along with the overhead in the TCP/socket driver.
#17
Not to mention the serialization delay for real-time [small] datagrams, such as telnet. You would probably want to look at using 802.1p to help deal with that.
#18
Quote:
As for speed testing, I have found that "nttcp" is one of the better programs out there. It auto-generates the stream of data, so it is not a file being read from a hard drive, which removes extra bottlenecks that might be introduced by the testing setup itself. It will give you a really good performance test between points in your network. Basically you need two systems: one runs as the "transmit host", the other as the "receive host". Let it run the test, and when it is over it will spit out a bunch of statistics, like how many bits/bytes were transferred, how long it took, average transfer speed, etc. You can then run it in reverse, in other words flip which system does the transmitting and which does the receiving; this will let you track down problems with systems and tweak settings.
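For anyone without nttcp handy, the same memory-to-memory idea can be sketched in a few lines of Python; the port, data size and hostnames below are placeholders, and this is only a rough illustration of what nttcp/IPerf do, not a replacement for them:
Code:
# Bare-bones memory-to-memory throughput test in the spirit of nttcp/iperf.
# Usage (hostnames/ports are placeholders):
#   python nettest.py recv                # on the receiving machine
#   python nettest.py send <receiver-ip>  # on the sending machine
import socket
import sys
import time

PORT = 5001
TOTAL_MB = 500                      # amount of generated data to send
CHUNK = b"\0" * 65536               # 64 KB buffer of generated data

def recv():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    total = 0
    start = time.time()
    while True:
        data = conn.recv(len(CHUNK))
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    conn.close()
    print(f"Received {total / 1e6:.0f} MB from {addr[0]} in {elapsed:.1f}s "
          f"-> {total * 8 / elapsed / 1e6:.0f} Mbit/s")

def send(host):
    sock = socket.create_connection((host, PORT))
    total_bytes = TOTAL_MB * 1024 * 1024
    start = time.time()
    for _ in range(total_bytes // len(CHUNK)):
        sock.sendall(CHUNK)
    sock.close()
    elapsed = time.time() - start
    print(f"Sent {TOTAL_MB} MB in {elapsed:.1f}s "
          f"-> {total_bytes * 8 / elapsed / 1e6:.0f} Mbit/s")

if __name__ == "__main__":
    if sys.argv[1] == "recv":
        recv()
    else:
        send(sys.argv[2])
Start the receiver first, then point the sender at its IP; swap the roles to test the other direction, just like the nttcp workflow described above.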
#19
Quote:
#20
Can nttcp be used between an XP and Linux machine?