Why do we need IPv6?



This may be a bit of a newbie question, but I am not quite sure why we really need IPv6. AFAIK, the story is as follows:

In the olden days, when computers were not plentiful, 32-bit IP addresses were enough for everybody. At that time, the subnet mask was implicit. Then the number of computers increased and 32 bits started to become insufficient.

So the subnet mask started to become explicit. Essentially, the size of an IP address increased.

My question is, what is the downside of continuing the addressing with the subnet masks? For example, when those become insufficient as well, can't we continue with "subnet-subnet masks", etc.?

I understand that it consumes more space than the original IPv4 (and maybe not much different from using IPv6), but aren't explicit subnet masks a sufficient solution? If not, why are they an insufficient solution?


Posted 2015-11-06T12:27:06.270

Reputation: 388

Warning: it seems the term 'subnet mask' is used in the wrong way here. A subnet mask is something like 255.255.255.0. What is talked about here is something else: masquerading, better known as NAT (Network Address Translation). – Sander Steffann – 2015-11-06T14:35:57.703

@SanderSteffann Actually yes. I realized later that I didn't use the correct terminology. Please feel free to edit the question. I am not completely sure which terms are correct to use. (Especially the "subnet-subnet mask" part) – Utku – 2015-11-06T14:38:27.787

It was a bit much so I put it in an answer :) – Sander Steffann – 2015-11-06T15:17:45.257

Nobody mentions how much easier IPv6 networking is than IPv4 networking. – Jacob Evans – 2015-11-07T03:16:12.187

IPv6 is needed for the same reason as 64 bit operating systems. To overcome a limitation. – Thorbjørn Ravn Andersen – 2015-11-07T09:55:05.080

One of the problems with any questions about IPv6 is that you will find a lot of quasi-religious zealotry. I usually answer IPv6 questions only with comments, to keep the zealots from harming my reputation score. Truth is, IPv6 may catch on, or it may not. It has too many shortcomings to make it a sure bet, and there are other options out there. – Kevin Keane – 2015-11-08T04:23:30.197

@Utku: I see you haven't accepted any of the answers yet. Is there something more you want to know? – Sander Steffann – 2015-11-08T16:39:17.917

@SanderSteffann Oops. Sorry, forgot that. – Utku – 2015-11-08T16:40:33.623

@KevinKeane: zealotry is unfortunately visible sometimes, and it hurts more than it helps :( I'm curious about what you see as other options. Care to take this to chat? http://chat.stackexchange.com/rooms/31266/discussion-on-why-do-we-need-ipv6

– Sander Steffann – 2015-11-08T16:51:50.943

Sure, we can chat if you happen to be online. The other option I see is limping along on IPv4 with band-aids such as CG-NAT for many more years. May be less technically elegant, but this is more a business than a technical decision. Those band-aids are going to be needed anyway for decades to come, until the whole world has transitioned to IPv6, so many businesses may question whether investing in IPv6 on top of that even makes sense. – Kevin Keane – 2015-11-08T19:21:17.363



Two things are getting confused here:

  • classful addressing vs CIDR
  • Masquerading / NAT

Going from classful addressing to Classless Inter-Domain Routing (CIDR) was an improvement that made address distribution to ISPs and organisations more efficient, thereby also increasing the lifetime of IPv4. In classful addressing an organisation would get one of these:

  • a class A network (a /8 in CIDR terms, with netmask 255.0.0.0)
  • a class B network (a /16 in CIDR terms, with netmask 255.255.0.0)
  • a class C network (a /24 in CIDR terms, with netmask 255.255.255.0)

All of these classes were allocated from fixed ranges. Class A contained all addresses where the first octet was between 1 and 126, class B ran from 128 to 191, and class C from 192 to 223. Routing between organisations had all of this hard-coded into the protocols.
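To make those fixed ranges concrete, here is a small Python sketch (my own illustration, not part of the original answer or any standard tooling) that derives the old class from the first octet alone, which is exactly all the information routers of that era needed:

```python
# Toy illustration: in classful addressing, the class (and therefore the
# implied netmask) followed directly from the first octet of the address.
def classful_class(address: str) -> str:
    first_octet = int(address.split(".")[0])
    if 1 <= first_octet <= 126:
        return "A"   # implied /8, netmask 255.0.0.0
    if 128 <= first_octet <= 191:
        return "B"   # implied /16, netmask 255.255.0.0
    if 192 <= first_octet <= 223:
        return "C"   # implied /24, netmask 255.255.255.0
    return "other"   # multicast (class D) and reserved (class E) ranges

print(classful_class("10.1.2.3"))      # A
print(classful_class("172.16.0.1"))    # B
print(classful_class("198.51.100.7"))  # C
```

Note that no netmask is transmitted or configured anywhere; that is what "the subnet mask was implicit" means.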

In the classful days when an organisation would need e.g. 4000 addresses there were two options: give them 16 class C blocks (16 x 256 = 4096 addresses) or give them one class B block (65536 addresses). Because of the sizes being hard-coded the 16 separate class C blocks would all have to be routed separately. So many got a class B block, containing many more addresses than they actually needed. Many large organisations would get a class A block (16,777,216 addresses) even when only a few hundred thousand were needed. This wasted a lot of addresses.

CIDR removed these limitations. Classes A, B and C don't exist anymore (since ±1993) and routing between organisations can happen on any prefix length (although blocks smaller than a /24 are usually not accepted, to prevent lots of tiny blocks from inflating routing tables). Since then it has been possible to route blocks of different sizes, and to allocate them from any of the previously-class-A/B/C parts of the address space. An organisation needing 4000 addresses could get a /20, which is 4096 addresses.
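The size arithmetic is easy to check with Python's standard ipaddress module (a sketch of my own; the example prefixes are arbitrary documentation/private ranges):

```python
import ipaddress

# A /20 allocation holds exactly 4096 addresses: a far better fit for an
# organisation needing ~4000 than a whole class B (/16) with 65536.
cidr_block = ipaddress.ip_network("203.0.112.0/20")
old_class_b = ipaddress.ip_network("172.16.0.0/16")

print(cidr_block.num_addresses)   # 4096
print(old_class_b.num_addresses)  # 65536
```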

Subnetting means dividing your allocated address block into smaller blocks. Smaller blocks can then be configured on physical networks etc. It doesn't magically create more addresses. It only means that you divide your allocation according to how you want to use it.
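A short sketch (illustrative only, using an arbitrary example prefix) makes the point that subnetting just divides what you already have:

```python
import ipaddress

# Subnetting divides an existing block; the total address count is unchanged.
allocation = ipaddress.ip_network("203.0.112.0/20")
subnets = list(allocation.subnets(new_prefix=24))  # split the /20 into /24s

print(len(subnets))                           # 16 subnets
print(sum(s.num_addresses for s in subnets))  # 4096, same total as before
```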

What did create more addresses was masquerading, better known as NAT (Network Address Translation). With NAT, one device with a single public address provides connectivity for a whole network of private (internal) addresses behind it. Every device on the local network thinks it is connected to the internet, even when it isn't really. The NAT router looks at outbound traffic and replaces the private address of the local device with its own public address, pretending to be the source of the packet (which is why it was also known as masquerading). It remembers which translations it has made so that for any replies coming back it can put back the original private address of the local device. This is generally considered a hack, but it worked and it allowed many devices to send traffic to the internet while using fewer public addresses. This extended the lifetime of IPv4 immensely.

It is possible to have multiple NAT devices behind each other. This is done for example by ISPs that don't have enough public IPv4 addresses. The ISP has some huge NAT routers that have a handful of public IPv4 addresses. The customers are then connected using a special range of IPv4 addresses (100.64.0.0/10, the shared address space set aside for this purpose in RFC 6598, although sometimes they also use normal private addresses) as their external address. The customers then again have a NAT router that takes the single address it gets on the external side and performs NAT to connect a whole internal network which uses normal private addresses.

There are a few downsides to having NAT routers though:

  • incoming connections: devices behind a NAT router can only make outbound connections as they don't have their own 'real' address to accept incoming connections on
  • port forwarding: this is usually made less of a problem by port forwarding, where the NAT router dedicates some UDP and/or TCP ports on its public address to an internal device. The NAT router can then forward incoming traffic on those ports to that internal device. This requires the user to configure those forwardings on the NAT router
  • carrier grade NAT: is where the ISP performs NAT. You won't be able to configure any port forwardings there, so accepting any incoming connections (BitTorrent, running your own VPN/web/mail/etc. server) becomes impossible
  • fate sharing: the outside world only sees a single device: that NAT router. Therefore all devices behind the NAT router share its fate. If one device behind the NAT router misbehaves it's the address of the NAT router that ends up on a blacklist, thereby blocking every other internal device as well
  • redundancy: a NAT router must remember which internal devices are communicating through it so that it can send the replies to the right device. Therefore all traffic of a set of users must go through a single NAT router. Normal routers don't have to remember anything, and so it's easy to build redundant routes. With NAT it's not.
  • single point of failure: when a NAT router fails it forgets all existing communications, so all existing connections through it will be broken
  • big central NAT routers are expensive

As you can see both CIDR and NAT have extended the lifetime of IPv4 for many many years. But CIDR can't create more addresses, only allocate the existing ones more efficiently. And NAT does work, but only for outbound traffic and with higher performance and stability risks, and less functionality compared to having public addresses.

Which is why IPv6 was invented: Lots of addresses and public addresses for every device. So your device (or the firewall in front of it) can decide for itself which inbound connections it wants to accept. If you want to run your own mail server that is possible, and if you don't want anybody from the outside connecting to you: that's possible too :) IPv6 gives you the options back that you used to have before NAT was introduced, and you are free to use them if you want to.

Sander Steffann

Posted 2015-11-06T12:27:06.270

Reputation: 5 930

Wow, very thorough answer. Thanks. Regarding the carrier grade NAT: you stated that BitTorrent would end, but I couldn't quite understand why that would happen. More precisely, I think it should have ended even today if that's the case. Let me explain: I guess that many home users use a NAT router, and this makes me think that a "leecher" cannot leech from a user who uses a NAT router, since the leecher won't know the address of the computer to connect to. Since the leecher wouldn't be able to find a seeder, this would mean the end of BitTorrent even today. Could you clarify this for me? – Utku – 2015-11-06T16:04:43.403

Port forwardings can be configured on home routers by the user to allow incoming connections, or the local BitTorrent client uses a special protocol to make the NAT router install port forwardings automatically. A carrier grade NAT router won't allow such port forwardings. BitTorrent still works without incoming connections, but not nearly as well. – Sander Steffann – 2015-11-06T16:07:41.917

Ah that's what I thought as well. Thanks again. By the way, how does bit torrent work without incoming connections? – Utku – 2015-11-06T16:10:37.710

@Utku, the glib answer is "it doesn't". That is, you are correct that incoming connections to many NAT'd BitTorrent nodes cannot be established. That said, such a node can establish connections to other nodes in the network and, since the data flows in both directions over a connection, it can still contribute to the network by propagating chunks that one of its peers has to others. – Rob Starling – 2015-11-06T17:03:14.923


On bittorrent & NAT: see http://superuser.com/questions/104462/how-does-bittorrent-work-with-only-outbound-connections. Summary: incoming connections piggyback on your outgoing connection; some clients use a relaying system to allow incoming connections from a new user across the connections with a shared peer. This is less efficient, and you will get lower speeds. It is impossible if all peers are behind a NAT without port forwarding.

– Timbo – 2015-11-06T19:44:40.553

@Timbo So, is it as simple as: "The leechers just go and actively seek the data from non-NAT'ed (or NAT'ed with port forwarding) peers"? Or am I missing something here? – Utku – 2015-11-06T20:26:20.450

@Utku: If you're not uploading, your download will be very slow. My understanding is that there is a loose trust system built into the network, and you are less likely to receive chunks if you are not sending chunks. It's "loose" largely because of the bootstrapping problem where you have nothing to upload yet. Leecher is an orthogonal question. Generally leecher is applied to folks who upload long enough to get the full item, then stop. – Timbo – 2015-11-06T22:10:47.110


on Fate Sharing, a relevant anecdote: http://techcrunch.com/2007/01/01/wikipedia-bans-qatar/

– njzk2 – 2015-11-07T22:53:28.003

I think the inability to establish incoming connections would be considered a feature by most ISPs. – Loren Pechtel – 2015-11-08T03:25:39.390

@SanderSteffann that's incorrect. PCP (Port Control Protocol) allows forwarding of UPnP messages to the CGN router. The UPnP server on the user's router needs to implement the IGD2 messages, however, as the original UPnP specification only allowed requests for specific ports. AddAnyPortMapping allows the UPnP client to request any free port. – Arran Cudbard-Bell – 2015-11-08T16:43:38.130

@ArranCudbard-Bell: I know it's possible with PCP, but I haven't seen any ISPs allow that on their networks. So as far as I know it's not really useful for end-users. Do you know of any ISPs that have deployed that? – Sander Steffann – 2015-11-08T16:45:53.013



No. We got as far as submitting patches to miniupnpd. The major issue was that support for IGD2 is needed both in the application and in the UPnP daemon running on the CPE.

With tens or hundreds of thousands of UPNP enabled applications needing to be updated it was deemed not to be worth the effort to push the CPE manufacturers to provide updated firmware supporting IGD2.

– Arran Cudbard-Bell – 2015-11-08T20:10:24.987

I'll still continue pushing it for customers who are starting to look at migration paths. It would get far greater traction if the console manufacturers added support. xbox live was one of the biggest sources of complaints. – Arran Cudbard-Bell – 2015-11-08T20:16:26.887

Calling large scale NAT "carrier grade" when one of its major effects is to reduce the reliability of IPv4 connections is... – Michael Hampton – 2015-11-09T00:31:33.697

@MichaelHampton: yes, the irony... – Sander Steffann – 2015-11-09T01:29:17.080

A great answer. I'd also add that while some protocols handle NATting gracefully (e.g. HTTP, which is what the whole thing was built on), others are more limited or downright impossible (e.g. HTTPS). As more and more web servers switch to HTTPS (as well as WebSockets and other "modern" updates), each server needs one or more public IP addresses. It's not unsolvable, but the solutions are going to be trade-offs (like needing a new level of trust between web hosting providers and their users). – Luaan – 2015-11-09T12:41:47.657

@Luaan: It seems like you are confusing NAT with virtual-hosting. HTTPS needed a separate public address in the past, but https://tools.ietf.org/html/rfc3546#section-3.1 introduced SNI in 2003. The main browser that doesn't support that is Internet Explorer on Windows XP, and there are plenty more reasons that people shouldn't use that one anymore.

– Sander Steffann – 2015-11-09T12:47:56.067

@SanderSteffann Some parts of the internet are dreadfully outdated, both on the client side and the server side. I work with servers that still don't support SNI regularly, and the same goes for the undying Windows XP. Even with American customers, we still have to reluctantly support Windows XP; try telling them that they can't access your website because their system is outdated :) And the only alternative is to have the HTTPS translation handled on the outside-facing endpoints, which has its own issues. Lots of things would be easy if people updated regularly. – Luaan – 2015-11-09T12:57:57.697


The Internet Protocol (IP) was designed to provide end-to-end connectivity.

The 32 bits of an IPv4 address only allow for about 4.3 billion unique addresses. Then you must subtract a bunch of addresses for things like multicast, and there is a lot of math showing that you can never use the full capacity of a subnet, so there are a lot of wasted addresses.

There are about twice as many humans as there are usable IPv4 addresses, and many of those humans consume multiple IP addresses. This doesn't even touch on the business needs for IP addresses.
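The arithmetic behind this claim is easy to check (a quick Python sketch of my own; the population figure is a rough 2015 estimate, not from the original answer):

```python
# Back-of-the-envelope check: 2^32 addresses vs. the world's population.
total_ipv4 = 2 ** 32
print(total_ipv4)  # 4294967296, about 4.3 billion

world_population = 7_300_000_000  # rough 2015 estimate
print(world_population / total_ipv4)  # roughly 1.7 people per address,
                                      # before subtracting multicast,
                                      # reserved ranges, and subnet overhead
```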

Using NAT to satisfy the IP address hunger breaks the IP end-to-end connection paradigm. It becomes difficult to expose enough public IP addresses. Think for a minute what you, as a home user with only one public IP address, would do if you want to allow multiple devices using the same transport protocol and port, say two web servers, which by convention use TCP port 80, to be accessed from the public Internet. You can port forward TCP port 80 on your public IP address to one private IP address, but what about the other web server? This scenario will require you to jump through some hoops which a typical home user isn't equipped to handle.

Now, think about the Internet of Things (IoT), where you may have hundreds or thousands of devices (light bulbs, thermostats, thermometers, rain gauges and sprinkler systems, alarm sensors, appliances, garage door openers, entertainment systems, pet collars, and who knows what all else), some or all of which want to use the same specific transport protocols and ports. Then think about businesses that need IP addresses to provide their customers, vendors, and partners with connectivity.

IP was designed for end-to-end connectivity so, no matter how many different hosts use the same transport protocol and port, they are uniquely identified by their IP address. NAT breaks this, and it limits IP in ways it was never intended to be limited. NAT was simply created as a way to extend the life of IPv4 until the next IP version (IPv6) could be adopted.

IPv6 provides enough public addresses to restore the original IP paradigm. IPv6 currently has 1/8 of the entire address space (2000::/3) set aside for globally routable unicast addresses. Assuming there are 17 billion people on earth in the year 2100 (not unrealistic), the current global IPv6 range (that 1/8 of the address block) provides over 2000 /48 networks for each and every one of those 17 billion people. Each /48 network is 65,536 /64 subnets, with 18,446,744,073,709,551,616 addresses per subnet.
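Those numbers can be verified with a few lines of Python (a sketch of the arithmetic only, using the 17-billion-people assumption from the paragraph above):

```python
# Verifying the arithmetic: 2000::/3 (1/8 of the IPv6 space) split into /48s.
global_unicast_48s = 2 ** (48 - 3)  # 45 bits' worth of /48 networks in a /3
people_in_2100 = 17_000_000_000

print(global_unicast_48s // people_in_2100)  # 2069 /48s per person ("over 2000")
print(2 ** (64 - 48))                        # 65536 /64 subnets per /48
print(2 ** 64)                               # 18446744073709551616 addresses per /64
```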

Ron Maupin

Posted 2015-11-06T12:27:06.270

Reputation: 60 371

So NAT is essentially a "patch" right? A patch that violates an essential principle of the internet. – Utku – 2015-11-06T15:43:59.933

NAT can be called a patch, but many have called it a hack, or worse. – Ron Maupin – 2015-11-06T15:46:33.807

Your second sentence is important! NAT creates an asymmetry between people who can run servers and people who can't (easily). That's a fundamental breach of the core democratic principles of the Internet. Whether or not someone cares about that, is a different question, of course. Most people who sit behind a NAT don't care. Many content providers do care to put as many people as possible behind a NAT, because then they can control what (the majority of) the Internet sees. – Jörg W Mittag – 2015-11-06T17:08:45.720

Good answer. Might I suggest expanding IoT to Internet of Things? Some might be unfamiliar with the abbreviation. Maybe even a link to the Wikipedia article.

– Eddie – 2015-11-06T17:12:41.933

@Eddie, I edited it per your suggestion. I sincerely hope that the network professionals using this site are familiar with IoT, but that may not be the case. – Ron Maupin – 2015-11-06T17:36:13.203

The network professionals might. But everyone who might come across this from a Google search might not. Besides, it's better writing etiquette to only use an acronym after it has been fully written out once. – Eddie – 2015-11-06T18:09:51.517

@JörgWMittag "Many content providers do care to put as many people as possible behind a NAT". And that's because they want to kill the competition right? The less providers, the higher chance that they will get the clicks, which means they will make more money through ads, etc. Is that the reason or did I screw up? – Utku – 2015-11-06T20:19:56.520

@JörgWMittag E2E had nothing to do with the "core democratic principles of the Internet" - it existed long before the Internet became a global network "for the people". – Alnitak – 2015-11-06T22:50:12.303

@Eddie, there are so many, many acronyms used in networking, there is no way I am going to spell out the meaning of every one in answers: BGP, OSPF, RIP, LAN, VLAN, QoS, AAA, ISR, ASR, ASA, IP, TCP, UDP, ARP, SNMP, STP, HSRP, VRRP, GLBP, DHCP, DSCP, MAC, IEEE, ACL, MPLS, PING, FTP, DNS, VoIP, PoE, EIGRP, WAN, ISP, MTU, OSI, PPP, PPPoE, AP, WLC, SFP, CLI, ADSL, HTTP, SSL, SDN, PBR, VRF, VPN... I think you get the idea. People who use this site are expected to know a lot of acronyms without being told what the actual words are. – Ron Maupin – 2015-11-06T23:30:11.740

@JörgWMittag, "Most people who sit behind a NAT don't care." Until their shiny new multiplayer game, application or toy doesn't work like they expect it to, then they certainly care. "Many content providers do care to put as many people as possible behind a NAT, because then they can control what...the Internet sees." It doesn't take NAT to control access. It can be done just as easily (if not more so) without NAT. NAT makes many things more difficult for content/service providers and of the people I know who are running such networks, I don't know one who uses NAT if they can avoid it. – YLearn – 2015-11-08T04:41:14.503

@RonMaupin - I registered to add this comment. I am (I think) a competent IT professional with many years of experience in IT (web app development and testing, SQL), but not an expert in networking. I do recognize many abbreviations you noted, but not all. And I do consider it professional courtesy to explain any such abbreviations in my answers. To help newcomers, as elders helped me many times. To make the 'net a better place for learning. Just an opinion to consider. – P.M – 2015-11-09T20:14:38.877


Simply put, there are no more IPv4 addresses available. All (or nearly all) of the available IPv4 addresses have been allocated. The explosion of IP devices (laptops, phones, tablets, cameras, security devices, etc.) has used up all the address space.

Ron Trunk

Posted 2015-11-06T12:27:06.270

Reputation: 33 360

That's not entirely true; the vast majority of the space is wasted because it was not subnetted well to start with. Now orgs have swaths of addresses they are not using as public addresses, but to give them back would require considerable effort in restructuring their networks. – JamesRyan – 2015-11-06T15:26:42.853

Yes, a lot of space is wasted. But the fact remains that the available space is exhausted. – Ron Trunk – 2015-11-06T15:32:13.567

@JamesRyan There is also the entire "Class E" range that could (at any time) be opened up for general unicast assignment. That would give the world 16 more /8's (approx 134 million more addresses). But then what? All it would do is postpone the "final depletion" of all addresses. So regardless of how many IPv4 addresses get reclaimed, or reallocated, the depletion is inevitable. IPv6 is the permanent solution. – Eddie – 2015-11-06T17:04:26.927

@Eddie, in theory, the "Class E" range could be opened up. In practice, 34 years of people assuming the range is "reserved, not in use" means that anyone getting one of those addresses will have limited connectivity. – Mark – 2015-11-06T19:13:37.250

@Mark Agreed. My point was simply that there are pockets of IPv4 space we could try to use to extend its lifetime, but why bother, IPv6 is inevitable. (I definitely wasn't saying we should extend IPv4's lifetime). – Eddie – 2015-11-06T20:05:57.317

@RonTrunk the explosion of devices like laptops and tablets is mostly on the inside, where they would be on private addresses and NATted. – allwynmasc – 2015-11-07T07:46:16.570

This answer doesn't really address the question IMO. I think the OP understands that IPv4 addresses are running out. He wonders why, even so, we can't simply use other methods (e.g. NAT) to extend the way we use the existing number of addresses, albeit with some misunderstandings over how such things work. – JBentley – 2015-11-09T15:32:30.240


First of all, the variable-length subnet mask technique did become insufficient. That is why people invented the Network Address Translation technique, where you can use one public IP to mask multiple private IPs. Even with this technique, we are almost out of IPs to allocate. Also, NAT breaks one of the founding principles of the Internet: the end-to-end principle.

So the main reason for using IPv6 is that everyone will have as many public IPs available as they need, and all the complexity of using NAT will disappear.

IPv6 also provides other functionality that I will not go into in detail: mandatory security at the IP level, stateless address autoconfiguration, no more broadcasting (only multicasting), and more efficient processing by routers thanks to a simplified header. Also, in this age of mobile devices, it has explicit support for mobility in the form of Mobile IPv6.

Regarding your proposal of using subnet-subnet masks: it does not sound feasible, since implementing it would break all existing applications, and it is not really elegant. If you have to change things, why not go for something new and well thought out?


Posted 2015-11-06T12:27:06.270

Reputation: 206

NAT wasn't invented because of a lack of addresses or lack of variable length subnets. It became popular simply because many ISPs would charge more for "business grade" services with allocated IP space. – Alnitak – 2015-11-06T22:53:33.627


The major organization that distributes IPs to the regional registries (IANA) is completely exhausted. ARIN, the regional registry for the US, has been exhausted for the past few months. The only regional registry that still has some IPs left is AfriNIC.

There are a lot of companies/orgs, like Ford and MIT, that have full class A ranges. Back when they acquired them, no one thought we would run out so quickly.

At this time, to buy IPs, you either wait for a company to go out of business and buy its addresses on the gray market, or you try to buy unused IPs from another company.

IPs allocated to one region cannot be used in another region. Well, they can, but it is highly discouraged (geo-IP).

At this time, a lot of companies are getting ready for IPv6. The switch isn't easy, as it's very expensive to buy new equipment with full IPv6 support for those who have tens of thousands of servers.


Posted 2015-11-06T12:27:06.270

Reputation: 11

IPs are not actually "designed for a region" - they were arbitrarily assigned to one of the 5 RIRs (which roughly correspond to the five continents). It is actually quite common that blocks of IPs are transferred (usually, sold) from one RIR that still has some left (today, only Africa has any left) to another. GeoIP is just a hack, not something designed into the IP protocol. – Kevin Keane – 2015-11-08T04:20:23.343