Why are IPv4 addresses 32-bit?


Many moons ago, when I was just a wee bairn commencing my career, I had a job interview for a low-level developer role. Having at that time just learned how CIDR was implemented, I was keen to show off my knowledge.

Sadly, that tactic didn't work out too well for me. I recall being completely floored by the very first question asked (and, thus ruffled, it all went downhill from there). The question was:

Why are IPv4 addresses 32-bit?

I readily admitted that I didn't know the answer, but I did know that the original protocol design divided the address space into an 8-bit network number and a 24-bit host identifier—so I tried to rationalise it on the grounds that the protocol designers imagined an Internet of a few networks (after all, it was originally intended to link together a specific few) each comprising many hosts and, for simplicity of programming, kept everything aligned to byte boundaries.

I recall the interviewer being unsatisfied with my answer and suggesting to me that the real reason is that it's guaranteed to fit inside a long int in C, so simplifies implementation details. Being young and green at the time, I accepted that as a reasonable answer and (before today) hadn't thought any more of it.

For some reason that conversation has just returned to me and, now that I reflect upon it, it doesn't seem entirely plausible:

  1. Under the original addressing scheme comprising fixed-size network and host fields, it's unlikely that a developer would have wanted to assign the concatenation of the two fields to a single variable (I don't have access to any early IP implementations to verify what they actually did in practice); and

  2. At the time that works on TCP/IP began, C was neither standardized nor the de facto "lingua franca" of low-level software development that it has become today.

Was the interviewer's suggestion actually founded in fact? If not, what were the real reasons that the protocol designers chose 32-bit addressing?

eggyal

Posted 2014-05-15T15:16:36.530

Reputation: 275

Same reason why 640 kB ought to be enough for anybody. Nobody expected toasters and fridges to have internet access. – None – 2014-05-15T17:32:04.803

@afwe: Hm. The question wasn't why didn't they choose a bigger number to begin with? aka why only 32-bits? (which is really the point that @Jens' excellent answer addresses), but more what was so special about 32-bits (rather than, say, 16-bits or 24-bits or 48-bits)? – eggyal – 2014-05-15T18:01:53.573

@Downvoter: Care to comment? – eggyal – 2014-05-23T21:51:03.197

Answers

21

Here's a link to a Hangout with Vint Cerf (Apr. 2014) where he explains that he thought the Internet was only supposed to be an experiment:

As we were thinking about the Internet (thinking well, this is going to be some arbitrary number of networks all interconnected — we don't know how many and we don't know how they'll be connected), but for national scale networks we thought "well, maybe there'll be two per country" (because it was expensive: at this point Ethernet had been invented but it wasn't proliferating everywhere, as it did a few years later).

Then we said "how many countries are there?" (two networks per country, how many networks?) and we didn't have Google to ask, so we guessed at 128 and that would be 2 times 128 is 256 networks (that's 8 bits) and then we said "how many computers will there be on each network?" and we said "how about 16 million?" (that's another 24 bits) so we had a 32-bit address which allowed 4.3 billion terminations — which I thought in 1973/74 was enough to do the experiment!

I had already posted this as a comment to Jens Link's answer, but I felt it should surface a bit more.

Daniel F

Posted 2014-05-15T15:16:36.530

Reputation: 333

More than "surface a bit more", I think that this answers the actual question more directly than Jens's answer. – eggyal – 2014-05-19T08:21:10.410

33

Easy answer: because Vint Cerf decided so. He thought that he was designing an experimental protocol and considered 32-bits to be more than sufficient for that purpose; he did not expect IPv4 to be used in production systems and so no greater thought was given to the size of the address space.

At the Google IPv6 Conference 2008, he hosted a panel discussion titled What will the IPv6 Internet look like? during which he recounted:

The decision to put a 32-bit address space on there was the result of a year’s battle among a bunch of engineers who couldn’t make up their minds about 32, 128 or variable length. And after a year of fighting I said — I’m now at ARPA, I’m running the program, I’m paying for this stuff and using American tax dollars — and I wanted some progress because we didn’t know if this is going to work. So I said 32 bits, it is enough for an experiment, it is 4.3 billion terminations — even the defense department doesn’t need 4.3 billion of anything and it couldn’t afford to buy 4.3 billion edge devices to do a test anyway. So at the time I thought we were doing an experiment to prove the technology and that if it worked we’d have an opportunity to do a production version of it. Well — [laughter] — it just escaped! — it got out and people started to use it and then it became a commercial thing.

Transcript by Peter E. Murray.

Jens Link

Posted 2014-05-15T15:16:36.530

Reputation: 3 708

Agh, how foolish of me! Occam's razor strikes again. At least you have given me the smug satisfaction of knowing that the interviewer was wrong. – eggyal – 2014-05-15T15:43:12.727

Well the interviewer might not believe this answer. ;-) – Jens Link – 2014-05-15T15:52:24.923

@eggyal isn't it possible for there to be multiple reasons for choosing a 32-bit ip length? – user5025 – 2014-05-15T15:54:31.403

@user5025: Yes, it is possible (in the general case). But if Vint says those were his reasons for choosing 32-bits for IPv4, then it's hard to argue that he also had others. – eggyal – 2014-05-15T15:56:20.647

He didn't say it was the only reason. He simply explained the reason for choosing 32 bit addresses over various other options. – user5025 – 2014-05-15T15:59:23.623

@user5025: Okay, that's a fair point. Indeed, he mentions that the engineers were squabbling over what the length should be, with some advocating 32-bit. So I suppose the question is what were their motivations for advocating 32-bit (i.e. what made it acceptable to Vint)? – eggyal – 2014-05-15T16:15:39.067

@eggyal: Returning to the original question, it would seem very likely that those pushing 32 bits were motivated by the fact that 16 would likely prove too limiting even for advanced testing, and 32 would be easier to work with than any other number of bits greater than 16. – supercat – 2014-05-15T20:58:02.673

@supercat: When you say "easier to work with", to what are you referring? CPU word sizes, operating systems and programming languages were all heterogeneous and non-standard at the time... so in what way was 32 easier than, say, 24? – eggyal – 2014-05-15T21:59:15.500

@eggyal: IPv4 was designed for systems with octet-addressable memory, and I would be surprised if initial development did not use an octet-based machine. All C compilers were required to have an unsigned type which could handle numbers from 0 to at least 4294967295, and I would be surprised if any compilers for octet-based machines didn't have a type which took exactly four octets to store. On an octet-based machine, if a buffer is word-aligned, and an IP address appears at a multiple-of-four offset, one may use pointer typecast and dereference to fetch the IP address into a single variable. – supercat – 2014-05-15T22:56:14.297

@eggyal: Once IP addresses are stored in variables, operators like == may be used to compare them. Use of 24-bit addresses would have required either defining a 24-bit type (which wouldn't allow direct comparisons) or assembling an int from multiple discrete bytes. – supercat – 2014-05-15T22:57:14.340

@supercat: You've made a number of assumptions there. First, that the capabilities of C compilers was in any way influential to the design of IP. Second, that it was deemed desirable to access the complete address in a single operation, despite the (then) fixed separation between network and host parts. These are the exact same points that I already made in my question. – eggyal – 2014-05-15T23:03:22.547

@supercat: Moreover, the predecessor protocols that IP superseded didn't satisfy your objectives: ARPANET IMP messages, for example, had 32-bit addresses that were spread across three non-contiguous fields in the message leader - none of which were aligned to 32-bit word boundaries. The PARC Universal Packet, another contributing influence, had only 16-bit addresses. Taken together with Vint's comments that some engineers were pushing for variable-length addressing, I don't find your suggestion that 32-bit word boundaries were "intrinsically" desirable very satisfying. – eggyal – 2014-05-15T23:04:16.367

@eggyal: My point is not that 32-bit integers were "definitely" a motivating factor, but rather that it seems highly plausible that enough of the engineers suggesting that size considered it a factor; absent evidence to the contrary, I don't think it can be ruled out as contributing to the eventual choice. – supercat – 2014-05-15T23:20:46.860

@supercat: Aye, I can agree with that. By all means post in an answer for an upvote! :) – eggyal – 2014-05-15T23:23:52.800

@eggyal: You asked what might have motivated engineers to pick 32 bits. My intention was to answer that particular question. I've written a TCP/IP stack on "bare metal" and had to deal with addresses on various occasions but was never interested in parsing them--only in determining whether they matched [this particular stack only handled incoming TCP/IP connections, so it had to deal with ARP, but not gateways]. – supercat – 2014-05-15T23:32:54.733

@JensLink Here's a more recent (Apr. 2014) interview explaining his decision: Hangout with Vint Cerf, TWiT Live (listen for about 5 minutes). Maybe it'll be useful to expand your answer. – Daniel F – 2014-05-16T19:53:24.600

0

Word size. They were writing software, not designing computer hardware, although I'm sure they had performance and portability in mind. At that time, 32 bits was a common word size: the word, the longword, int or longInt, depending on the platform. See Word Size Choice.

They wrote this software "during the first decades of 32-bit architectures (the 1960s to the 1980s)." – Wikipedia

Ron Royston

Posted 2014-05-15T15:16:36.530

Reputation: 3 237

Unless you're suggesting that the architects of TCP/IP had a particular machine architecture in mind, I'm not sure where you're going with this argument... do you have any evidence that they were using/designing for 32-bit architectures, or even that the word size was a relevant consideration to the length they selected for the network address? – eggyal – 2017-01-08T18:01:39.597

@eggyal: Languages for 8-bit and 16-bit machines often included a 32-bit integer data type, but it was far less common for languages on 32-bit machines to have multi-word integer data types. At least at the source-code level, working with 32-bit values is essentially as convenient as working with 16-bit values, and is definitely more convenient than working with larger types. Further, for devices that have limited communications needs, 32-bit addressing could be just fine if they communicate through more sophisticated gateways. – supercat – 2018-10-16T17:43:34.370