The idea of a tiered Internet, with “diamond lanes” for heavy commercial services like web TV and IP telephony, is a key contemporary issue — not only technically, but democratically. The issue is not, however, entirely without irony…
New Internet services like Joost and YouTube are about to exceed the capacity of the underlying Internet backbone of cables and switches. Only the other week, Google itself warned that the Internet as it stands today isn’t suited for TV. Google therefore wants to cooperate with the cable operators, who had previously feared that companies like Google would take over the lucrative market for Internet TV. “Net neutrality” was one of last year’s buzzwords in the US, in the debate over whether network operators should be allowed to appropriate parts of the Internet infrastructure and create “diamond lanes” dedicated to heavier traffic, particularly web TV. Many, including Microsoft, Google and Yahoo, were strongly critical of the idea, and instead advocated a form of net neutrality under which telcos and cable operators would not be able to decide whose data flows faster or slower. They want legislation requiring all Internet traffic to be treated equally; this is above all in the interest of Internet companies like those just mentioned, which would otherwise risk having to pay extra to be allowed to use these “diamond lanes”.
What has to be noted is that the debate is, without exception, all about services like TV, games and IP telephony — that is, transmissions which need to be instantaneous and which are essentially based on flows of data (streaming), as opposed to transfers of objects (file-sharing). These instantaneous services are typically well suited for rigorous DRM; essentially copy protection and fees. The self-imposed challenge that the network operators say they face — to “upgrade” the Internet backbone — is, in other words, a shortcut to increased commercialisation not only of the infrastructure but of the very Internet experience itself.
This is why so many file-sharers, who often see themselves as defenders of the digital commons or even as anti-capitalists, strongly oppose this tiering of the Internet and instead advocate net neutrality.
There is, however, an ironic contradiction here.
For file-sharing spokesmen (like, for example, Piratbyrån representative Rasmus Fleischer) often duly criticise another form of neutralism: technological neutralism, which presumes that the same laws can apply across the board to all types of communication, regardless of their form and technical logic. The currently debated EU law proposal on data retention, for example, presupposes, somewhat simplified, that nation states should be granted the right to wiretap sensitive communication networks, analogue as well as digital. To equate these fundamentally different technologies is technological neutralism, and it is fundamentally naïve when applied to ontological differences between technologies: just think of how easy it is to encrypt digital communication compared to analogue, or how hard it is to monitor p2p-based file-sharing compared to radio channels. Note how questionable it really is to impose a TV licence on computers, or to fold blogs and web services into a regulatory system originally devised for magazines and newspapers.
This latter form of technological neutralism is therefore seriously misleading, Fleischer argues, and many with him. But what happens if we flip the argument and turn some of his remarks on this type of neutralism against the abovementioned “net neutrality” debate?
The net neutralists — that is, the file-sharers who oppose the commercial tiering of the Internet backbone — then appear downright nostalgic: the argument that the Internet must remain what it has always been itself presupposes a kind of Platonic ideal world, to which habits and traditions connected with established media can gradually be assigned and given an “eternal” appearance. One example would be the assumption that there is an “Internet idea”, which is then manifested through different media technologies (wireless, ADSL, fibre-optic direct lines…). These technologies are seen as different variations on the same idea, and their respective differences are glossed over; the Internet backbone is assumed to serve heavy BitTorrent junkies as well as occasional e-mailers, and nothing will ultimately change this.
Net neutralism is in that sense a way of repressing the unmanageable, or of postponing it into the future. It is a comforting assurance that the relative statelessness, facelessness and independence from commercial actors that the base structure of the Internet has benefited from will remain.
And I can definitely understand them. But the problem is that services like BitTorrent do, after all, create bottlenecks, which affect all users when some users download terabytes of pornography and computer games. The Internet isn’t a superhighway; it is pipe after pipe, each with limited capacity. When they are full, they are full. And you have to wait. If the large majority of consumers want that mythological future, repeatedly said to be just around the corner, where we hold instant video conferences and get streams of (undeniably DRM-packaged) HD video delivered straight into our living rooms, then we also have to take the bull by the horns and realise that the choices we make today might in fact be a side-track taking us far from the initial idea of the Internet, historically speaking.
Once again an example of how the globalisation-critical Left often actually constitutes the conservative, retrospective voice in today’s doubtlessly ever more commercialised society. The issue has been framed as a democratically central one, but the self-interests shine through — not only between Internet companies and network operators, but between those who want an Internet where many file-share “for free” (exploiting the system, in other words*) and thus push the existing, communal network to its limits, and those who would happily lead us into a far more sequestered space. Do we want to pay the price for an Internet that increasingly resembles those commercial visions of the future where homes have screens on every available surface and are constantly online, on interminable credit? Is this an unavoidable development when file-sharing otherwise risks eating up all the capacity that the communal lines have to offer? Have the positions really been pushed that far?
*) This is of course a deliberately tendentious statement. One could just as well say, given the original peer-to-peer structure of the Internet, that the file-sharers are utilizing the very specific technical strength of the Internet (asynchronous packet transfer, as opposed to DTM) — but then we easily end up back on square one: the notion of an “Internet idea”, or a “built-in logic” specific to the Internet.