Pick Your Poison
Sending larger frames over the network is not always better. There are pros and cons to consider, and there are good reasons why 1500 bytes was settled on as a sensible size for a general-purpose network.
Reduce error retransmissions
Every time your streaming PC receives a network frame, the network card (or the PC itself) checks the integrity of the received data. If a frame fails that check, your NAS has to re-transmit the same data again. On a slow, bandwidth-limited, or highly error-prone connection, a large MTU is undesirable, as there will be more re-transmissions and each one costs more.
This is the reason why your Internet connection (WAN) often has an MTU lower than 1500 bytes.
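The effect can be sketched with a toy model (my own illustration, with an arbitrary error rate, not a measured figure): if each byte has some independent chance of being corrupted, the probability that a frame arrives intact shrinks as the frame grows, and every corrupted frame costs a full retransmission.

```python
# Toy model (illustrative assumptions): each byte is corrupted
# independently with a fixed probability, and any corrupt frame is
# retransmitted in full until it arrives intact.

def expected_sends(frame_bytes: int, byte_error_rate: float) -> float:
    """Average number of transmissions needed before a frame arrives intact."""
    p_frame_ok = (1.0 - byte_error_rate) ** frame_bytes
    return 1.0 / p_frame_ok

# On a noisy link the retransmission penalty grows with frame size:
for mtu in (576, 1500, 9000):
    print(f"{mtu:5d}-byte frames -> {expected_sends(mtu, 1e-5):.4f} sends each")
```

The exact numbers depend entirely on the assumed error rate; the point is only that the penalty rises with frame size.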
The network interface signals the CPU when there is data ready to send or receive. This is a very expensive operation, as the network is significantly slower than the CPU. Adjusting the MTU to the correct size reduces the number of network ⇌ CPU interactions.
Setting the MTU too low increases the number of packets on the network. The CPU also becomes busier, since there are more packets to process, and efficiency drops because each packet carries a smaller chunk of data.
Setting the MTU too high improves CPU efficiency: fewer packets "fly" across the network, but each packet is now much bigger, which can cause network congestion. Additionally, with integrity checking on, if a frame has an error the entire frame has to be retransmitted, adding unnecessary load to the network.
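To make the trade-off concrete, here is a small back-of-the-envelope calculation. The header sizes assume a plain IPv4 + TCP stream over Ethernet with no options or VLAN tags; real stacks vary.

```python
# Assumed per-frame overheads (illustrative, not universal):
ETH_FRAMING = 18      # Ethernet header (14 bytes) + frame check sequence (4)
IP_TCP_HEADERS = 40   # IPv4 header (20 bytes) + TCP header (20), no options

def frames_needed(total_payload: int, mtu: int) -> int:
    """Packets required to carry total_payload bytes of application data."""
    per_frame = mtu - IP_TCP_HEADERS
    return -(-total_payload // per_frame)   # ceiling division

def wire_efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that are application payload."""
    return (mtu - IP_TCP_HEADERS) / (mtu + ETH_FRAMING)

for mtu in (576, 1500, 9000):
    print(f"MTU {mtu:5d}: {frames_needed(100_000_000, mtu):6d} frames "
          f"for 100 MB, {wire_efficiency(mtu):.1%} payload on the wire")
```

Jumbo frames cut the frame count roughly six-fold and gain a few points of efficiency, which is where the CPU saving comes from; the cost is the bigger retransmission unit described above.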
The effectiveness of the checksum also decreases as the payload gets bigger. A checksum is like a short summary of a longer text. In this case we are trying to summarise the standard 1480 bytes of payload into 4 bytes. We cannot tell what the original 1480 bytes were from the 4 bytes alone, but we know the same 1480 bytes will always produce the same 4-byte checksum.
As you increase the payload, the checksum becomes less effective because you increase the likelihood that two different blocks of data will summarise to the same 4-byte checksum. In other words, the following scenario becomes more likely:
- NAS sends a frame of 9000 bytes
- The NAS network card calculates the checksum and appends it to the frame
- The data is sent over the network; an error occurs in transit, and some bytes are not received correctly by the streaming PC
- The streaming PC takes the corrupted block of 9000 received bytes, calculates the checksum, and validates it against the checksum sent in the frame
- The checksums match (incorrectly), and the streaming PC has no way of knowing that
- Thus the bad data is sent further up the food chain.
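The collision risk behind this scenario is easy to see, because a 4-byte checksum can only take about four billion values. The following birthday-style experiment (my own sketch, using Python's `zlib.crc32`, a 32-bit CRC of the same family Ethernet uses for its frame check) finds two different payloads that share a checksum:

```python
# Birthday-style search (an illustration, not a network capture): feed
# many distinct random-looking payloads through a 32-bit CRC until two
# of them share a checksum. A frame corrupted from one such payload
# into the other would pass the integrity check undetected.
import hashlib
import zlib

seen = {}            # crc value -> the payload that first produced it
collision = None
for i in range(2_000_000):
    # sha256 just manufactures distinct, random-looking 8-byte payloads
    payload = hashlib.sha256(str(i).encode()).digest()[:8]
    crc = zlib.crc32(payload)
    if crc in seen and seen[crc] != payload:
        collision = (seen[crc], payload)
        break
    seen[crc] = payload

a, b = collision
print("payload A :", a.hex())
print("payload B :", b.hex())
print("same CRC-32:", hex(zlib.crc32(a)))
```

A collision typically turns up after fewer than a hundred thousand tries. The odds of any single real frame colliding are still tiny, but every extra byte a checksum has to cover nudges them upward.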
The size of 1500 was chosen to balance the needs and considerations of all the factors above, i.e. 1500 is thought to be the sweet spot for everything to flow smoothly.
Comment from: Did Visitor
"In my next article, I will setup the minimum equipment you’ll need for a Jumbo Frame enabled network. The article will also detail the steps you need to take to enable Jumbo Frames, and some basic diagnostics to ensure everything is working."
Yes! I await this article with high interest; we are at the beginning of local network optimisation. Have you had a look at http://fasterdata.es.net/network-tuning/udp-tuning/?