
The Internet Protocol: Past, Present, and Future – Part 1

We're starting a three-part series on the Internet Protocol this week, in honor of the successful launch and growing adoption of IPv6. In today's part, we discuss the history of the internet, the need for a standardized communication protocol and the creation of IPv4.

How many IP addresses are left? This may seem like a silly question, and it's something that most internet users don't think about, but it is a question that could decide the future of the Internet. Over the next few days, in recognition of the activation of a solution to the biggest problem facing the Internet, I will be writing a guide to the Internet Protocol, better known as IP, the backbone of all communication across the internet. This guide will cover the history of the protocol, how it functions, the workings of IPv4, and the new IPv6, which went live on June 6th.

First, we’ll touch on the history and inner workings of the Internet Protocol. What is it? Why was it created? And how does it work?

History of the Internet Protocol

To understand the origins of the Internet Protocol, we first need a quick refresher course on the history of the internet as a whole. What we know today as the Internet was originally called the ARPANET. Run by various universities and the US military, its purpose was to relay data between computer centers across the country. Established in 1969, ARPANET grew over the years as computers became cheaper and smaller, allowing more universities and organizations to house their own machines. ARPANET eventually grew to the point where other networks, like the UK's Mark I, began to be connected to it, leading to what we know of today as the Internet, short for "inter-network."

In order for communications to be passed along the network, however, an addressing system was needed. Otherwise, a signal being sent to UCLA might be mistakenly delivered to Los Alamos or MIT, leading to confusion and an inability to use the network for anything. Various methods were proposed, but the one that eventually became the standard was a suite of protocols known as the Internet Protocol Suite, or TCP/IP, named for its two core protocols: the Transmission Control Protocol and the Internet Protocol.

The Transmission Control Protocol acts as an intermediary between applications looking to send data over the Internet and the IP packets underneath; rather than each application having to break its data into properly formatted packets and handle loss detection and retransmission itself, TCP handles all of that, allowing applications to simply send and receive data.
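To see what "simply send and receive data" means in practice, here is a minimal Python sketch (not anything from the original ARPANET era, just a modern illustration) of an application talking to itself over a loopback TCP connection. The program only writes and reads a byte stream; the operating system's TCP stack handles segmentation into packets, ordering, and retransmission underneath.

```python
import socket
import threading

def echo_server(listener: socket.socket) -> None:
    """Accept one connection and echo back everything it receives."""
    conn, _ = listener.accept()
    with conn:
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            conn.sendall(chunk)

# Bind to an ephemeral port on the loopback interface.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

# The application just calls sendall(); TCP decides how the data is
# split into segments and reassembled on the other side.
with socket.create_connection(("127.0.0.1", port)) as client:
    message = b"hello over TCP " * 100
    client.sendall(message)
    client.shutdown(socket.SHUT_WR)
    received = b""
    while True:
        chunk = client.recv(4096)
        if not chunk:
            break
        received += chunk

assert received == message
```

Notice that nothing in the code mentions packets at all; that is precisely the abstraction TCP provides.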

The Internet Protocol is the meat of modern networked communications. IP works by creating a packet of information containing a header and the data being sent. The header includes information such as the destination address and, optionally, which routers to pass through on the way to the destination. It also includes the source address for the message and any other information required, such as "TTL," or Time To Live, a counter decremented as the packet travels; when it reaches zero, a router discards the packet as having taken too long to reach its destination.
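The header layout described above can be sketched in a few lines of Python. This is only an illustration with hypothetical addresses, and it leaves the header checksum at zero (a real IP stack computes it over the header), but the fields and their order match the IPv4 header defined in RFC 791.

```python
import socket
import struct

def build_ipv4_header(src: str, dst: str, payload_len: int, ttl: int = 64) -> bytes:
    """Pack a minimal 20-byte IPv4 header (no options, checksum left at 0)."""
    version_ihl = (4 << 4) | 5   # version 4, header length 5 x 32-bit words
    tos = 0                      # type of service
    total_length = 20 + payload_len
    identification = 0
    flags_fragment = 0           # flags and fragment offset
    protocol = 6                 # 6 = TCP payload
    checksum = 0                 # normally computed over the header
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, tos, total_length, identification, flags_fragment,
        ttl, protocol, checksum,
        socket.inet_aton(src), socket.inet_aton(dst),
    )

header = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=100)
assert len(header) == 20      # minimum IPv4 header size
assert header[8] == 64        # the TTL byte, decremented at each hop
```

Byte 8 of the header is the TTL; each router along the path decrements it before forwarding, which is what prevents a misrouted packet from circling the network forever.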

The beauty of IP's design lies in its assumptions about reliability. The IP specifications assume at all times that any given network is unstable and that any signal may be lost at any time. This makes IP a connection-less architecture, as opposed to a connection-oriented one. While this does present some challenges, such as packets arriving out of order, payload corruption going undetected by IP itself, and packet duplication, it also allows for some amazing technical features, such as the near-indestructibility of the internet as a whole.
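The connection-less model is easiest to see through UDP, which runs directly over IP and inherits its fire-and-forget character. The Python sketch below (a modern illustration, not part of the article's historical narrative) sends a datagram with no handshake and no delivery guarantee; over the loopback interface it will arrive, but nothing in the API promises that it would over a lossy network.

```python
import socket

# Receiver: bind a UDP socket to an ephemeral loopback port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect(), no handshake -- just address the datagram and go.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"fire and forget", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
assert data == b"fire and forget"
```

Compare this with the TCP example earlier: there is no connection to set up or tear down, which is exactly the trade-off the article describes, with simplicity and resilience bought at the price of ordering and delivery guarantees.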

As a side note, this has led to a recurring urban myth that the Internet was designed to withstand a nuclear war, which is completely false; the fact that the Internet would most likely survive a nuclear war (at least as far as the infrastructure is concerned) is a happy coincidence of its natural expansion and redundancies. Another factor is the automatic, dynamic routing of traffic over the internet through the use of the Internet Protocol Suite, making it nearly impossible to cut off one area of the internet from another. For more information on the durability of the Internet, feel free to read my previous article about it here.

In 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper by Vint Cerf and Bob Kahn entitled "A Protocol for Packet Network Intercommunication," which laid the groundwork for the TCP/IP model. IPv0 through IPv3 were developmental versions used between 1977 and 1979. The most widely used version of the protocol, known as IPv4, was officially standardized with the release of RFC 791 in 1981. Next time, we will discuss the specifications for IPv4, how it routes traffic to various servers around the globe, and the problem that its creators never saw coming.

VR-Zone is a leading online technology news publication reporting on bleeding edge trends in PC and mobile gadgets, with in-depth reviews and commentaries.
