TCP/IP over Ethernet is packet switched and end-to-end. In the good old days, every peer had a public address and was therefore a full-fledged member of the network. Technologies like e-mail, newsgroups, chat, and the web were born from this architecture. In the intervening years, the rapid spread of Internet access has started technical and social movements that we are still coming to terms with. A side effect of those two early design decisions rapidly became a new hope for free and democratic communications.
It never could last.
The predecessor to the Internet, ARPANET, had no consistent method of converting the name “google.com” to the address 126.96.36.199. Instead, each machine had a Hosts file which acted as a directory of names and addresses. Over time, the most important names and addresses were collected into a single Hosts file managed by the Stanford Research Institute. Almost every computer on the ARPANET would download the latest Hosts file on a regular basis. However, this solution became unworkable when the number of computers on ARPANET started numbering in the thousands. The Domain Name System (DNS) was born as a long-term solution shortly after the advent of TCP/IP and the Internet. The practical result has been strong centralization and a management bureaucracy governing which peers can be easily recognized as full-fledged members of the Internet.
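The Hosts file idea is simple enough to sketch in a few lines. This is a toy illustration, not historical data — the entries and the `resolve` helper are invented for this example (192.0.2.10 is a reserved documentation address):

```python
# A toy name-to-address directory in the spirit of the ARPANET Hosts file.
# Every host kept its own copy and looked names up locally -- no network query.
hosts = {
    "google.com": "126.96.36.199",   # the example pairing from the text
    "example.org": "192.0.2.10",     # hypothetical entry, RFC 5737 test address
}

def resolve(name):
    """Look the name up locally, the way every ARPANET host once did."""
    try:
        return hosts[name]
    except KeyError:
        # Unknown names meant waiting for the next file update from SRI.
        raise LookupError(f"{name} not in Hosts file")

print(resolve("google.com"))
```

DNS replaced this flat file with a distributed, hierarchical lookup — which is exactly where the management bureaucracy crept in.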
For the first decade and a half of the Internet’s existence, networks and the computers in them were assigned addresses from one of three classes: A, B, and C. There were 126 potential class A networks with 16,777,214 addresses per network. Conversely, there were 2,097,152 potential class C networks with 254 addresses per network. The idea of this segmentation was to allow simplicity in the computers, called routers, helping data move from one computer to another. But, in response to unexpected rapid growth, the method was changed in the mid-1990s to something significantly more complex: Classless Inter-Domain Routing (CIDR). Now, those routers are more complex, bigger, and more expensive. That transition created a new economy and allowed companies like Cisco and 3Com to grow and prosper. But, the practical result was a strong centralization and management bureaucracy for any peer that could be a full-fledged member of the Internet.
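The figures above fall straight out of the bit layout of a 32-bit address. A quick back-of-the-envelope check (the reserved-address conventions are the standard classful ones):

```python
# Classful addressing math behind the numbers quoted above.
# A 32-bit address is split into network bits and host bits by class.

# Class A: leading bit 0, 7 network bits, 24 host bits.
class_a_networks = 2**7 - 2     # 126 (network 0 and 127/loopback are reserved)
class_a_hosts = 2**24 - 2       # 16,777,214 (all-zeros and all-ones host reserved)

# Class C: leading bits 110, 21 network bits, 8 host bits.
class_c_networks = 2**21        # 2,097,152
class_c_hosts = 2**8 - 2        # 254

print(class_a_networks, class_a_hosts)   # 126 16777214
print(class_c_networks, class_c_hosts)   # 2097152 254
```

The waste is obvious from the math: an organization too big for 254 addresses but nowhere near 16 million had no good option, which is a large part of why CIDR replaced the class system.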
Your ISP hates you
The physical hardware that the Internet works on is owned and operated by hundreds of companies working together. At the top are the Tier 1 network service providers - they manage the high-capacity connections and core routers tying the world together. Working together, they can reach the entire Internet. Each provider freely passes along data from its Tier 1 “peers” at no charge because those “peers” do the same for it. At the next level, the Tier 2 providers “peer” with each other but also pay the Tier 1 providers for access to portions of the Internet they cannot reach through “peering” alone. It is important to note that Tier 2 providers pay for the data they send to and receive from Tier 1 providers. Finally, at the bottom, Tier 3 providers are what we think of as Internet service providers (ISPs). They purchase access to the Internet from Tier 2 and Tier 1 providers and pay for every byte they send and receive - and pass the savings on to you and me.
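The billing logic reduces to one rule: traffic between equals is settlement-free, traffic up the hierarchy is paid transit. A toy model, with an entirely made-up price per gigabyte:

```python
# Toy model of who pays whom in the provider hierarchy.
# The $0.05/GB transit rate is invented for illustration, not a real market price.
def transit_cost(customer_tier, provider_tier, gigabytes, price_per_gb=0.05):
    """Return what the lower-tier party pays for the exchanged traffic."""
    if customer_tier == provider_tier:
        return 0.0                       # settlement-free "peering" between equals
    return gigabytes * price_per_gb      # the lower tier buys transit, per byte

print(transit_cost(1, 1, 1000))   # Tier 1 to Tier 1: peering, costs nothing
print(transit_cost(2, 1, 1000))   # Tier 2 to Tier 1: pays for every gigabyte
```

Note the asymmetry the text describes: a Tier 2 or Tier 3 provider pays for bytes in *both* directions, which is why your upload traffic costs your ISP real money.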
An ISP’s business model is based on a concept called “over-provisioning.” They offer their customers connections at particular speeds - but the total amount of bandwidth they purchase from their providers is less than what would be required if all their customers used their advertised speed. I’m promised 1.5 Mbps by Clearwire, but they rely on the fact that I and everyone else almost never use that much bandwidth in order to give it to me on the rare occasion I do need it. But, P2P and BitTorrent have begun invalidating this assumption. Before, the worst customers would download MP3s from Napster or an FTP site. Now, the worst customers download and then continuously upload MP3s, movies, and software. ISPs’ costs changed from a one-time download to a recurring cost of uploading! This is why when you purchase broadband access, the download speeds are always faster than the upload speeds - ISPs profit from your online experience being primarily that of a consumer.
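The economics of over-provisioning are easy to see with numbers. Everything below is hypothetical — the subscriber count and the 5% peak-utilization assumption are invented for illustration; only the 1.5 Mbps figure comes from the text:

```python
# Toy over-provisioning model. All inputs except the advertised speed are made up.
subscribers = 1000
advertised_mbps = 1.5        # the per-customer promise mentioned in the text
peak_utilization = 0.05      # assume at most 5% of total capacity is in use at once

worst_case_mbps = subscribers * advertised_mbps    # 1500 Mbps if everyone maxed out
provisioned_mbps = worst_case_mbps * peak_utilization

print(provisioned_mbps)      # 75.0 -- the ISP buys 1/20th of what it "sells"
```

Continuous BitTorrent seeding pushes that utilization assumption up, and every point it rises is bandwidth the ISP must actually buy.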
The Internet is running out of IP addresses. The long-term solution for this problem is to move everyone to the next version of IP - version 6. But, one short-term mitigation has been the introduction of network address translation (NAT), which allows a single address to be used for multiple computers. This is what your home router uses to let all your computers share a single connection. Supporters of a pure end-to-end Internet hate NAT and rant about its fatal flaw: the computers inside cannot be independently contacted. This is similar to corporate phone systems with a single phone number - you can call out easily, but for someone to reach you they have to call in and know your extension.
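The phone-extension analogy maps directly onto NAT’s translation table. A minimal sketch, with made-up addresses and ports (203.0.113.7 is a reserved documentation address standing in for the router’s public side):

```python
# Minimal sketch of NAT port mapping. Addresses and port numbers are invented.
public_ip = "203.0.113.7"    # the router's single public address

nat_table = {}               # (private_ip, private_port) -> allocated public port
next_port = 40000

def outbound(private_ip, private_port):
    """An inside host opens a connection: allocate a public port mapping for it."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return (public_ip, nat_table[key])

def inbound(public_port):
    """A packet arrives from outside: deliverable only if a mapping already exists."""
    for key, port in nat_table.items():
        if port == public_port:
            return key
    return None   # unsolicited traffic is dropped -- nobody knows your "extension"

print(outbound("192.168.1.10", 5000))   # calling out works effortlessly
print(inbound(40001))                   # calling in with no mapping goes nowhere
```

The asymmetry is the whole complaint: `outbound` always succeeds, but `inbound` only works for connections an inside host initiated first.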
Scaremongering begins here
The Internet used to be a medium where everyone could contribute. You always had the privilege, if you thought everything else was wrong and sucked, to start up a web server on your own computer and let the world come look. But, the Apples, Microsofts, Ciscos, Time-Warners, AOLs, Earthlinks, Clearwires, United States’, European Unions, North Koreas, and almost any other entity with a finger in the Internet pie except individuals stand to gain from hampering your ability to communicate bidirectionally. The centralization and bureaucracies involved in managing the Internet have opened up new industries of fraud, censorship, and monitoring that collectively stand against end-to-end communications. If you only use the Internet to download, then you lend support to a consumer-only Internet.
On a consumer-only Internet, both copyright violators and Chinese dissidents would be gagged.
Realistically, the Internet will never become another television or radio. Its bidirectional nature enabled the explosion of new technologies, content, and communications we enjoy today. But can you name any real reason the dominant interests would not keep the scale tipped as far toward the consumer side as possible?