Why was the MTU size for Ethernet frames set at 1500 bytes?



Is there any specific calculation that was done to arrive at this number, and what factors were taken into consideration in that calculation?


Posted 2013-08-24T16:14:32.097

Reputation: 186

IEEE people are resisting adding 9k to the standard because the mathematical guarantees the FCS brings today at 1.5k would not all hold at 9k. – ytti – 2013-08-25T06:27:58.177


@ytti, that is only one of the arguments against endorsing > 1500 frames. The full text of Geoff Thomson's letter (containing the IEEE objections to standardizing jumbo frames) is in draft-ietf-isis-ext-eth-01 Appendix 1. The objections start with the word "Consideration"

– Mike Pennington – 2013-08-26T04:57:50.750

Did any answer help you? If so, you should accept the answer so that the question doesn't keep popping up forever, looking for an answer. Alternatively, you could provide and accept your own answer. – Ron Maupin – 2017-08-10T00:27:41.607



The answer is in draft-ietf-isis-ext-eth-01, Sections 3-5. Ethernet uses the same two bytes in different ways in the Ethernet II (DIX) and 802.3 encapsulations:

  • Ethernet II uses the first two bytes after the Ethernet source mac-address for a Type
  • 802.3 uses those same two bytes for a Length field.

I'm including an annotated diagram of each frame type below, showing exactly where the conflicting bytes are in the Ethernet header:

  • RFC 894 (commonly known as Ethernet II frames) use these bytes for Type

       | DA | SA | Type | Data | FCS |
       DA      Destination MAC Address (6 bytes)
       SA      Source MAC Address      (6 bytes)
       Type    Protocol Type           (2 bytes: >= 0x0600 or 1536 decimal)  <---
       Data    Protocol Data           (46 - 1500 bytes)
       FCS     Frame Checksum          (4 bytes)
  • IEEE 802.3 with 802.2 LLC / SNAP (used by Spanning-Tree, ISIS) use these bytes for Length

       | DA | SA | Len  | Data | FCS |
       DA      Destination MAC Address (6 bytes)
       SA      Source MAC Address      (6 bytes)
       Len     Length of Data field    (2 bytes: <= 0x05DC or 1500 decimal)  <---
       Data    Protocol Data           (46 - 1500 bytes)
       FCS     Frame Checksum          (4 bytes)

Both Ethernet II and 802.3 encapsulations must be able to coexist on the same link. If the IEEE allowed 802.3 payload lengths of 1536 (0x600 hex) bytes or more, it would be impossible to distinguish large 802.3 LLC or SNAP frames from Ethernet II frames, because Ethernet II Type values start at 0x600 hex.
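The disambiguation rule described above can be sketched in a few lines of Python. This is an illustrative receiver-side check, not real driver code; the function and constant names are hypothetical:

```python
# How a receiver tells Ethernet II from 802.3 frames: inspect the two
# bytes following the 6-byte destination and 6-byte source MAC addresses.
ETHERTYPE_MIN = 0x0600  # 1536: smallest valid Ethernet II Type value
MAX_8023_LEN = 0x05DC   # 1500: largest valid 802.3 Length value

def classify_frame(frame: bytes) -> str:
    """Return 'ethernet-ii', '802.3', or 'invalid' based on bytes 12-13."""
    type_or_len = int.from_bytes(frame[12:14], "big")
    if type_or_len >= ETHERTYPE_MIN:
        return "ethernet-ii"  # value is a Type (e.g. 0x0800 = IPv4)
    if type_or_len <= MAX_8023_LEN:
        return "802.3"        # value is the length of the LLC data
    return "invalid"          # 1501-1535 falls in neither range
```

Because the Type and Length value ranges do not overlap, a single comparison resolves the frame format; allowing payloads of 1536 bytes or more would destroy exactly this property.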


I am including a link to pdf copies of the Ethernet Version 1 spec and Ethernet Version 2 spec, in case anyone is interested...

Mike Pennington

Posted 2013-08-24T16:14:32.097

Reputation: 26 089

Well, Ethernet II frames have their Type field begin at 0x0600 (from the IEEE 802.3x-1997 spec) because the maximum length of 802.3 was just below that. So that's just an effect, not a cause. – nos – 2013-08-27T08:09:35.590

@nos, to claim that this is an effect instead of a cause presupposes that you can prove the cause... can you provide authoritative evidence for your proposed cause? The original Ethernet Version 1 spec published in 1980 already uses the Type field, and in 1984, the IP protocol was specified using Ethertype 0x0800 – Mike Pennington – 2013-08-27T09:11:26.567

Indeed, the Ethernet I and II specs already had a Type field (which at that time had no restrictions), and already specified the max data length of 1500; at that time there were no 802.3 frames. So one cannot conclude that the limit of 1500 was added in a later spec because of the Type field. – nos – 2013-08-27T10:45:32.447

@nos I disagree; Ethernet II had to coexist with the preexisting standard, and it defined the same field to act as both a Type field in the prior standard and a Length field in the new standard. Given that there MUST be NO possibility of confusion between the two standards, which must coexist on the same network, any length that could look like an existing type would not be allowed. As the existing type list started at 0x600, a number less than that had to be chosen. To allow for further expansion of the standard, some band had to be left available should it be needed. – None – 2013-10-31T00:12:01.210

I support @nos's comment: using a type of 0x600 or more is a side effect of the maximum size of 1,500 bytes and not the other way around. Had we used packets of 9,000 bytes, the types would have started at 0x2800. The question was crystal clear: why is it that we used 1,500 bytes in the first place? user1171's answer is on point and an actual answer to the OP's question. – Alexis Wilke – 2018-06-01T06:53:00.263


At the other end of the range, 1500 bytes, two factors led to the introduction of this limit. First, if packets are too long, they introduce extra delays for other traffic using the Ethernet cable. The other factor was a safety device built into the early shared-cable transceivers: an anti-babble system. If the device connected to a transceiver developed a fault and started transmitting continuously, it would effectively block all other traffic from using that Ethernet cable segment. To protect against this, the early transceivers were designed to shut off automatically if a transmission exceeded about 1.25 milliseconds, which equates to a data content of just over 1500 bytes. Because the transceiver used a simple analogue timer to shut off the transmission when babbling was detected, 1500 was selected as a safe approximation of the maximum data size that would not trigger the safety device.
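The arithmetic behind this answer's claim is easy to check. Assuming the ~1.25 ms cutoff the answer cites (the exact timer value is this answer's assumption, not a published figure), the wire carries a bit over 1500 bytes before the transceiver shuts off:

```python
# Back-of-the-envelope check: how many bytes fit in a ~1.25 ms
# transmission window on 10 Mbit/s Ethernet?
BIT_RATE = 10_000_000   # 10 Mbit/s
CUTOFF_S = 1.25e-3      # anti-babble timer limit cited in the answer

bits_before_cutoff = BIT_RATE * CUTOFF_S      # 12,500 bits
bytes_before_cutoff = bits_before_cutoff / 8  # 1562.5 bytes on the wire
# Subtracting 18 bytes of header + FCS leaves roughly 1544 bytes for
# payload, so a 1500-byte payload limit stays safely under the timer.
print(bytes_before_cutoff)  # 1562.5
```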

Source: http://answers.yahoo.com/question/index?qid=20120729102755AAn89M1


Posted 2013-08-24T16:14:32.097

Reputation: 151

5Hi @user1171: StackExchange preferred style is to include the answer material here, and link out as a reference. That way, when the link eventually rots, the answer is still useful. – Craig Constantine – 2013-08-25T12:02:59.480

The jabber function required the MAU to shut down after 20 to 150 ms for 10 Mbit/s (IEEE 802.3), after 40 to 75 kbit for Fast Ethernet, and after twice that for Gigabit Ethernet, far exceeding the frame length. The Yahoo post is wrong. – Zac67 – 2018-07-20T13:50:29.263


When Ethernet was originally developed as a shared medium or bus with 10Base5 and 10Base2, collisions of frames were frequent and expected as part of the design. Contrast this with today, when almost everything is switched into separate collision domains and running full duplex, where no one expects to see collisions.

The mechanism to share the "ether" employed CSMA/CD (Carrier Sense Multiple Access with Collision Detection).

Carrier Sense meant that a station wanting to transmit had to listen to the wire, sensing the carrier signal, to ensure no one else was talking, since it was Multiple Access on that medium. Allowing 1500 bytes (though an arbitrary number as far as I can tell) was a compromise that meant a station could not monopolize the wire by talking too long at one time. The more bytes transmitted in a frame, the longer all other stations must wait for that transmission to complete. In other words, shorter bursts, i.e. a smaller MTU, meant other stations got more opportunities to transmit and a fairer share. On a slow transmission medium (10 Mb/s), stations would suffer even longer delays if the MTU were allowed to exceed 1500.

An interesting corollary question would be: why the minimum frame size of 64 bytes? Frames were transmitted in "slots" of 512 bit times, which took 51.2 µs, allowing for round-trip signal propagation in the medium. A station has to not only sense the IFG (interframe gap of 96 bit times) to know when to start talking, but also listen for collisions with other frames. Collision Detection assumes the maximum propagation delay and doubles it (to be safe), so it doesn't miss a transmission starting at about the same time from the other end of the wire, or a reflection of its own signal when someone forgot the terminating resistor at the end of the cable. The station must not finish sending its data before sensing a collision, so transmitting for at least 512 bit times, i.e. 64 bytes, guarantees this.
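The slot-time numbers quoted above, and the contrast with a maximum-size frame, can be verified with a few lines of arithmetic (a sketch for 10 Mbit/s Ethernet; the 1518-byte figure assumes a 14-byte header plus 4-byte FCS around a 1500-byte payload):

```python
# Slot-time arithmetic for classic 10 Mbit/s Ethernet.
BIT_RATE = 10_000_000  # bits per second
SLOT_BITS = 512        # Ethernet slot time in bit times

slot_time_us = SLOT_BITS / BIT_RATE * 1e6  # 51.2 microseconds
min_frame_bytes = SLOT_BITS // 8           # 64 bytes minimum frame
min_payload = min_frame_bytes - 18         # 46 bytes after header + FCS

# For contrast, a full 1500-byte payload (1518-byte frame) occupies
# the shared wire for about 1.2 ms, stalling every other station.
max_frame_time_ms = 1518 * 8 / BIT_RATE * 1e3
```

The two ends of the range are thus set by opposite pressures: the 64-byte floor makes collision detection reliable, while the 1500-byte ceiling bounds how long one station can hold the wire.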


Posted 2013-08-24T16:14:32.097

Reputation: 5 144


Originally, the maximum payload was defined as 1500 bytes in 802.3. Ethernet v2 uses Type values >= 1536 in the same field, and this is what IP implementations use. Most carrier-class vendors support payloads around 9000 bytes ("jumbo frames") these days. Since 1500 bytes is the standard that all Ethernet implementations must support, this is what is normally set as the default on all interfaces.


Posted 2013-08-24T16:14:32.097


You should google maxValidFrame; it was defined by the IEEE. Consequently, the 9 KB jumbo frame implementations that are common today are not officially compliant with Ethernet, but they work quite well for Ethernet II payloads – Mike Pennington – 2013-08-24T18:00:02.203

Strictly speaking, not 802.3-compliant. IP uses Ethernet v2 though, so I tend not to even think of 802.3... – None – 2013-08-24T18:19:36.627


Jumbos are not compliant with Ethernet II or 802.3 after 802.3x was ratified; a clause in 802.3x defines maxValidFrame at 1500-byte payloads. Thus after 1997, any payload exceeding 1500 bytes is not compliant. See the letter that the IEEE 802.3 chairman sent to the IETF regarding this issue. In short, 802.3 is much more than a frame standard... it defines both framing and hardware requirements, meaning that hardware implementations depend on compliance with the frame format. Half duplex with CSMA/CD needs <= 1500-byte payloads.

– Mike Pennington – 2013-08-24T18:42:36.263


The minimum Ethernet frame is based on the Ethernet slot time, which is 512 bit times (64 bytes) for 10 Mbit/s Ethernet. After subtracting 18 bytes for the Ethernet header and CRC, you get 46 bytes of minimum payload.

The Ethernet slot time was specified so that CSMA/CD would function correctly. A minimum-size frame must take longer to transmit than the round-trip signal time over the longest possible cable; otherwise, deterministic collision detection would be impossible. A collision at the far end of the cable must have time to propagate back to the sender while it is still transmitting.
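This constraint can be sanity-checked numerically. The propagation speed (~0.66 c in coax) and the 2500 m maximum network span are assumed round figures here, not values from this answer:

```python
# Illustrative check: a 512-bit minimum frame must still be in flight
# when a collision signal returns from the far end of the network.
BIT_RATE = 10_000_000   # 10 Mbit/s
MIN_FRAME_BITS = 512    # Ethernet slot time
PROP_SPEED = 2.0e8      # ~0.66 c in coax (assumed)
CABLE_LEN = 2500.0      # max 10BASE5 network span in metres (assumed)

tx_time = MIN_FRAME_BITS / BIT_RATE      # 51.2 us to send a minimum frame
round_trip = 2 * CABLE_LEN / PROP_SPEED  # 25 us cable round trip
# Real budgets also charge repeater and transceiver delays against the
# slot time, which is why 51.2 us leaves headroom over the 25 us cable.
assert tx_time > round_trip
```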

Pooja Khatri

Posted 2013-08-24T16:14:32.097

Reputation: 101

I'm having trouble understanding how the mechanism for determining the minimum Ethernet frame size has anything to do with the current 1500-byte maximum de facto standard. Please elaborate! – Stuggi – 2016-10-23T08:43:28.007

@Stuggi It doesn't. – Ken Sharp – 2017-07-17T02:21:59.873