The latest internet storm is the hubbub about Google’s and Verizon’s Joint Policy Proposal for an Open Internet. Essentially, wired networks aren’t allowed to prioritize traffic, wireless is interesting and unique and should be decided later, and the FCC should watch over things.
There are smaller points here and there. I encourage you to spend an hour reading it over, word for word, to discover for yourself what it means, because most of the internet has it wrong (much like the iPhone 4 antenna issue, which has turned into ‘antennagate’ and is better described as ‘antennapaloosa’).
So what does that Google document mean? It’s a lot of high-level language with broad generalizations, and it sets a framework for future lawmakers.
To clarify, read Brian Fling’s Google, Verizon and an “Open Internet” from a Mobile perspective. He explains how wireless networks are different from wired networks (and believe me, they are).
So!
When you get right down to it, what can we really do?
- Cap bandwidth
- Charge per-gigabyte
- Create a tiered system of separate internets
- Charge more all around
- Destroy Hollywood
- Create a foundation for peer-to-peer networking
- Others
You can’t cap the bandwidth. Throttling might be fine, but most people get so little speed as it is, and even a slow connection can download hundreds of gigabytes a month. Frankly, fast bandwidth is necessary, or you’ll spend hours a day waiting for your pages to load.
I’d be all for charging per-gigabyte, except that providers would invariably charge too much. I would love to believe they could do some research into how much data people actually need and then create a simple stepped chart of prices, but they are either incredibly stupid or greedy liars: they say you can visit so many webpages and get so many emails with a given allotment, when really you can’t.
See the Rogers Data Calculator. It looks like they’ve made some improvements lately, but they still say a webpage is 289 KB. I wish that were true; a lot of the pages I visit are several megabytes. (Still, it seems fair right now.)
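To see how far off that figure can throw a monthly estimate, here’s a quick back-of-the-envelope comparison. The browsing habits and the 3 MB “realistic” page size are my own assumptions, not Rogers’ numbers:

```python
# Back-of-the-envelope comparison (assumed numbers, not Rogers' methodology):
# how much the monthly estimate shifts when the per-page size is wrong.

PAGES_PER_DAY = 100    # assumed browsing habit
DAYS_PER_MONTH = 30

for label, page_kb in [("Calculator's 289 KB page", 289),
                       ("More realistic ~3 MB page", 3000)]:
    monthly_gb = page_kb * PAGES_PER_DAY * DAYS_PER_MONTH / 1_000_000
    print(f"{label}: ~{monthly_gb:.1f} GB/month")  # ~0.9 GB vs ~9 GB
```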
A good per-gigabyte system should start with a small, cheap tier, because basic internet access is very important. If people really want to torrent a bunch of videos, games, and music, they can pay for all that. Imagine if you could get a basic internet connection for $5 or $10 per month.
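To make that concrete, here’s a minimal sketch of what a stepped price chart could look like. Every tier and dollar figure below is invented for illustration, not any provider’s actual plan:

```python
# A sketch of a stepped per-gigabyte price chart. All tiers and prices are
# invented for illustration.

TIERS = [
    (5,    5.00),   # up to 5 GB for $5 -- enough for email and light browsing
    (25,  10.00),   # up to 25 GB for $10
    (100, 20.00),   # up to 100 GB for $20
    (None, 40.00),  # anything beyond that
]

def monthly_price(usage_gb: float) -> float:
    """Return the flat price of whichever tier this month's usage falls into."""
    for cap_gb, price in TIERS:
        if cap_gb is None or usage_gb <= cap_gb:
            return price

print(monthly_price(3))    # 5.0  -- the $5 basic connection
print(monthly_price(80))   # 20.0 -- a heavier month
```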
Obviously, creating a tiered system wouldn’t work, because everyone would have to pay a premium just to get basic bandwidth (although you already do). Worse, the tiers would extend into the internet itself, and you’d subscribe to certain sites the way you subscribe to cable channels. Imagine being allowed to visit only the biggest, most corporate sites.
Charging more all around won’t work, because they’re already charging us exorbitant amounts for relatively pitiful network connections.
Utterly destroying Hollywood and hunting down any famous musicians would reduce the amount of torrent traffic. This obviously isn’t going to happen. (Though it would make a great movie!)
Most network traffic is duplicate data: mailing lists, illegally downloaded movies, millions of copies of the latest Firefox installer, and the most popular YouTube videos.
If we could lay down protocols, programs, and infrastructure that let nearby machines share their local copies without hampering the network, it could really reduce the amount of bandwidth being used for large files.
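As a rough sketch of the idea, the snippet below identifies a file by a hash of its contents, so an identical copy sitting on a neighbour’s machine can be used instead of fetching the file again from the wider internet. The peer index and addresses are hypothetical; a real system would need peer discovery and ISP cooperation:

```python
# Local-first fetching keyed by content hash (a sketch, not a real protocol).
# The core idea: identical bytes can come from any machine that has them.

import hashlib
import urllib.request

# Hypothetical index of files already held by machines on the local network,
# keyed by the SHA-256 of their contents.
LOCAL_PEERS = {
    # "<sha256 hex>": "http://192.168.1.42:8080/blobs/<sha256 hex>",
}

def fetch(origin_url: str, expected_sha256: str) -> bytes:
    """Try a nearby copy first, then fall back to the origin server."""
    sources = [LOCAL_PEERS.get(expected_sha256), origin_url]
    for source in filter(None, sources):
        data = urllib.request.urlopen(source).read()
        if hashlib.sha256(data).hexdigest() == expected_sha256:
            return data  # same bytes, regardless of which machine served them
    raise IOError("no source produced the expected content")
```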
CDNs already take care of the top-level branches, where the internet backbones would otherwise need to duplicate content, but that data still needs to be downloaded separately each time someone in that region requests the file. We need a way for this content to be managed AFTER the end-of-line provider downloads it.
It would basically be a Fractal Internet: the big pipes feed the smaller pipes, which share with the even smaller pipes. I don’t know whether any research has been put into this yet.
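As a toy illustration of that fractal structure (all names and contents below are made up), each request walks from the smallest pipe to the biggest until some tier has the file, and copies are filled back down so the next neighbour’s request never has to leave the local network:

```python
# A toy model of the fractal hierarchy. Each tier is just a cache dictionary;
# everything here is made up for illustration.

lan_peers  = {}
isp_cache  = {}
cdn_region = {"firefox-installer": b"<installer bytes>"}
origin     = {"firefox-installer": b"<installer bytes>", "rare-file": b"<bytes>"}

HIERARCHY = [lan_peers, isp_cache, cdn_region, origin]  # smallest pipes first

def fetch(key: str) -> bytes:
    for depth, tier in enumerate(HIERARCHY):
        if key in tier:
            data = tier[key]
            for smaller in HIERARCHY[:depth]:  # fill the smaller pipes back down
                smaller[key] = data
            return data
    raise KeyError(key)

fetch("firefox-installer")               # first request climbs up to the CDN
assert "firefox-installer" in lan_peers  # the next request stays on the LAN
```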