by Felix Stalder
[This is the raw version of an article that was published as "The Age of Media Autonomy" in Mute, # 26, June 2003]
Collaborative media are emerging as an alternative form of media production uniquely suited to the Internet. Whereas broadcast media are becoming more and more homogenized and closed, collaborative media are filling an existing void and experimenting with the still largely untapped possibilities of new forms of media production. Central to their development is the task of creating models of openness that facilitate collaboration even though the general environment is often very hostile.
Concentration and Homogenization in Mainstream Media
Over the last decade, the landscape of mass media has been deeply transformed and is now characterized by technological convergence, concentrated ownership and homogenization of content. Previously distinct production environments and delivery channels have collapsed into a unifying stream of 0s and 1s. You can now listen to the radio on your computer and receive news headlines and images on your cell phone. Deregulation has allowed this to be matched and advanced by a concentration in ownership, integrating previously independent media companies horizontally (multiple TV stations belonging to the same company) and vertically (a single company controlling the entire chain from production to distribution across various media). The outcome of all this activity is a more or less globally integrated media system dominated by fewer than ten transnational giants, most importantly AOL Time Warner, Disney, Bertelsmann, Vivendi Universal, Sony, Viacom and News Corporation. Together they own all the major film studios, cinema chains and music companies, the majority of the cable and satellite TV systems and stations, all US television networks and the bulk of global book and magazine publishing. Despite the recent decline of their market valuations, nearly all of these firms still rank among the 300 largest corporations in the world. Barely 20 years ago, most of these companies either did not exist or were certainly not among the 1,000 largest firms in the world. The concentration of ownership, despite its global ambitions, is regionally uneven. It is more pronounced in the US than in Europe, where print media ownership is still fragmented and the public broadcast system remains broadly supported. However, media content across the board has become more homogeneous. The main reason is the dependence of all mass media, private or public, on advertising revenue.
Consequently, they all need to attract the market segment most interesting to advertisers: the affluent, young middle class, predominantly but not entirely white.
The result is a relatively homogeneous, self-referential global mass media space that has effectively been closed off to opinions critical of its structure and to issues unlikely to attract the target audience. Hence we have endless stock market updates, as if the majority of the audience were stock brokers (it certainly helps that this kind of 'news' is exceptionally cheap to produce). Of course, very few people actually need this kind of by-the-minute reporting. However, as Thomas Frank argued in his book "One Market Under God", the myth that the stock market is for everyone is as pervasive as it is false. This highlights the way in which media frame consciousness, acting as a powerful reinforcer of conformity on all levels. Framing politics exclusively as partisan politics, for example, precludes any fundamental discussion, since only those who profit from the current system – the small number of parties among whom power rotates – are allowed to speak. Similarly, the focus on middle-class, nuclear families reinforces stereotypes of normality and marginality.
The latter point in particular was targeted during the 1990s, when minorities tried to get "fair" representation of their particular identities in mainstream media. To some extent this was successful, as some of them – gays, for example – were discovered as profitable market segments and easily integrated into the advertisement-driven logic. TV became more "colourful" at the same time as the diversity of opinions it aired decreased. The "politics of representation", by and large, failed as a progressive strategy. The other approach, mainly in the US, of constructing alternative information channels on cable TV or radio has been only somewhat more successful, not least because such channels could reach only relatively small local audiences (with the exception of NPR and PBS) and because the economics of mass media production are not favourable to low-budget projects.
Against this backdrop of a mass media system more closed than ever – i.e. controlled by powerful gatekeepers able to restrict what can be transmitted through it – the focus of progressive media projects has shifted to the Internet, first as a delivery platform and, increasingly, also as a unique production environment.
Internet: Architecture and Code
The Internet's potential as an open media space – where access to the means of production and distribution is not controlled centrally – is based on the particulars of its design (architecture) and its implementation (code), as Lawrence Lessig has argued extensively. On the level of architecture, the "end-to-end" (e2e) principle levels the playing field. The idea behind this is straightforward: keep the network simple by pushing the "intelligence" to the periphery. In practice this means that the network treats all traffic indiscriminately; all it does is route traffic from one end to the other. Only the machine at the periphery – where someone is watching a video stream, for example – does the critical work of differentiating between different kinds of data. To the router responsible for getting the content across, it is all the same: an endless stream of packets in which only the addresses of destination and origin are of interest. What the e2e principle guarantees is that, technically, anyone has the same potential for getting his or her content across the Internet. This applies to content within a given format – say, an Indymedia web page vs. a CNN web page – but also across formats – an email message vs. an mp3 file – and, very importantly, extends to currently unknown formats. Bandwidth constraints notwithstanding, the architecture of the Internet has been designed – at least on this level – to be highly egalitarian and non-discriminatory. If Gertrude Stein were a network engineer, she would say a packet is a packet is a packet.
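The indifference of the network can be sketched in a few lines of code. The following is a purely illustrative toy model (the names, routes and payloads are invented for the example, not drawn from any real router software): the "router" forwards packets by looking only at the destination address, never at the payload.

```python
# A minimal, hypothetical sketch of the end-to-end principle:
# the network routes by address and treats every payload as opaque.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes  # opaque to the network: video, email, mp3 - all the same

class Router:
    def __init__(self, routes):
        self.routes = routes  # destination address -> next hop

    def forward(self, packet):
        # Only the destination address is inspected;
        # the payload is never examined.
        return self.routes[packet.dst]

router = Router({"indymedia.org": "hop-a", "cnn.com": "hop-a"})
p1 = Packet("reader", "indymedia.org", b"<html>grassroots news</html>")
p2 = Packet("reader", "cnn.com", b"<html>corporate news</html>")

# Both packets are treated identically, whatever they carry:
assert router.forward(p1) == router.forward(p2)
```

The differentiation between kinds of data – rendering a page, playing a stream – happens only on the machines at the ends, exactly as the e2e principle prescribes.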
In order to take advantage of this potential, it is important that the protocols – the language in which machines speak to one another – are freely accessible. The Internet's early engineers understood this and very consciously placed the key protocols (TCP/IP, SMTP, HTTP etc.) in the public domain so that anyone could build applications based on these standards. It is the combination of a network that does not discriminate among the content it transports and the free availability of the key protocols that allowed many of the most interesting innovations of the Internet to be introduced from the margins, without the approval of any central authority, be it a standard-setting body like the W3C or ISOC, or a governing body like ICANN.
Of course, not everyone is happy with such easy access to the global information networks. A powerful coalition of business and security interests is working hard to gain control over this open infrastructure and, effectively, close it off. So far, they have not been successful, though nothing guarantees that they will not succeed in the future.
Collaborative production: open publishing
Among the first and still most advanced projects for collaborative content production are those focused on open publishing. By open I mean that the bulk of the published content is provided by a distributed group of independent producers/users who follow their own interests, rather than being commissioned and paid for by an editorial board and created by professional producers. There is a great variety of open publishing projects, a few of which will be discussed later on, but they all have to contend with a fundamental problem: on the one hand, they need to be open and responsive to their users' interests, otherwise the community will stop contributing material. Only if users recognize themselves in the project will they be motivated to contribute. On the other hand, the projects need to create and maintain a certain focus. They need to be able to deal with content that is detrimental to the goals of the project. In other words, the noise needs to be kept down without alienating the community through heavy-handed editorialism. The strategies for creating and maintaining such a balance are highly contextual, depending on the social and technological resources that make up a given project.
Email Lists: Nettime
The oldest and still widely used collaborative platforms are simple mailing lists. Among these, one of the oldest and most active is nettime, a project I know intimately as a co-moderator for the last five years. It was started in 1995 to develop a critical media discourse based on hands-on involvement in, and active exploration of, the emerging media spaces. Its original constituents were mainly European media critics, activists and artists. Over the years, this social and regional homogeneity was somewhat diluted as the list grew to close to 3,000 participants.
An email list is, fundamentally, a forwarding mechanism. Every message sent to the list address is forwarded to each address subscribed to the list, and consequently everyone receives the same information. This is a broadcast model, with the twist that everyone can be a sender. Unless the participant base is socially homogeneous and more or less closed, noise will be an issue, if only because different people have different ideas of what the project should be. However, for individual subscribers there is no effective way to modulate the flow of messages to make it conform to their idea of the project. The issue of moderation, in some shape or form, is fundamental to all community-based projects, as it raises the question of how to enforce community standards. The email list as a platform offers an extremely limited set of choices for implementing moderation. The only way is to have all messages sent to the address go into a queue. The moderators who have access to this queue can then decide which messages get forwarded to all subscribers and which do not. The platform differentiates between only two social roles: normal subscriber and moderator. There is nothing in between. Subscribers see only those messages that moderators approve. Due to the broadcast model of the information flow, the moderation process needs to be closed.
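The mechanism just described can be modelled in a short sketch. This is a toy model, not nettime's actual list software; the names and messages are invented for the example. It shows the platform's only two roles: subscribers post into a queue, and moderators either broadcast a message to everyone or drop it.

```python
# A hypothetical, minimal model of a moderated email list:
# one queue, two roles (subscriber / moderator), broadcast delivery.

class ModeratedList:
    def __init__(self, subscribers):
        self.subscribers = list(subscribers)
        self.queue = []                              # awaiting moderation
        self.inboxes = {s: [] for s in subscribers}  # what each member sees

    def post(self, message):
        # Every incoming message waits in the moderation queue first.
        self.queue.append(message)

    def approve(self, message):
        # A moderator forwards the message to every subscriber alike:
        # the broadcast model - everyone receives the same stream.
        self.queue.remove(message)
        for s in self.subscribers:
            self.inboxes[s].append(message)

    def reject(self, message):
        # Dropped messages simply never reach the list.
        self.queue.remove(message)

lst = ModeratedList(["ana", "ben"])
lst.post("on tactical media")
lst.post("off-topic spam")
lst.approve("on tactical media")
lst.reject("off-topic spam")

# Every subscriber sees exactly the same, moderator-approved stream:
assert lst.inboxes["ana"] == lst.inboxes["ben"] == ["on tactical media"]
```

Note what the model makes visible: there is no per-subscriber filtering anywhere. Whatever the moderators approve, everyone receives, which is exactly why conflicts over moderation standards are so hard to defuse on this platform.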
Of course, this creates conflicts over the question of what the community standard really is, often expressed as an issue of censorship. Nettime, rather than upgrading the platform, opted to deal with this problem by creating a second, unmoderated channel, also archived on the web, to serve as a reference, so that everyone who wanted to could see all messages. The social sophistication of this technological choice was low. It addressed only a single concern – the lack of transparency in the moderation.
In the end, the lack of technical sophistication can only be compensated for socially, through trust. The community needs to trust the moderators to do their job in the interest of the community. This 'blind' trust is checked by the moderators' need to keep the community motivated to produce content.
Collaborative News Analysis: Slashdot
Slashdot – founded in September 1997 – is a web-based discussion board, initially populated by the softer fringes of US hacker culture, but now with a global appeal, though still clearly US-centric. Unlike most other projects, it is owned by a for-profit company, OSDN, and has a small, salaried staff, mainly for editorial functions, management and technical development. Slashdot's culture has been deeply influenced by two of the central preoccupations of (US) hackers: hacking, that is, making technology work the way one wants, and a libertarian understanding of free speech. The two interests are seen as heavily intertwined and are reflected in the still ongoing development of the platform.
There is a sort of implicit consensus in the Slashdot community. On the one hand, not all contributions are of the same quality, and most people appreciate having a communication environment where noise is kept at a sustainable level. On the other hand, different people have very different ideas as to what constitutes quality and which level of noise is sustainable. The phenomenon of "trolling" – posting comments just to elicit controversy – is highly developed on Slashdot and has fostered several subcultures with their own particular charms.
The first realization requires some moderation facility; the second requires that individual users can modify the results of the moderation to fit their own needs.
Being hackers who favor practical solutions over ideological debates – debates which still hamper most Indymedia sites when dealing with the issue of free speech vs. quality control and community standards – they set out to create what is today one of the most sophisticated moderation mechanisms for open discussion environments. Basically, there are two rating mechanisms: one applied centrally on the site, and one applied decentrally by each user. A team of moderators, selected automatically based on the quality of their previous contributions, rates each comment multiple times. The resulting average constitutes the rank, expressed as a value between -1 and 5.
Each user can individually define which ranks of messages to display – for example, only comments rated 3 and above. In addition, each user can white- or blacklist other users, thus overriding the moderation done on the site, and publish what is called a journal, over whose content he or she has full control.
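The interaction of the two mechanisms can be sketched as follows. This is a simplified, hypothetical model, not Slashdot's actual code (the user names and ratings are invented): site-wide ratings are averaged into a rank clamped to the -1 to 5 range, and each reader then applies a personal threshold plus white- and blacklists on top of it.

```python
# A hypothetical sketch of Slashdot-style two-level moderation:
# central ranking plus per-user filtering.

def rank(ratings):
    # Average of the moderators' ratings, clamped to the -1..5 range.
    avg = sum(ratings) / len(ratings)
    return max(-1, min(5, round(avg)))

def visible(comments, threshold, whitelist=(), blacklist=()):
    # Per-user filtering: blacklisted authors are always hidden,
    # whitelisted authors always shown, everyone else is filtered
    # by the reader's personal rank threshold.
    shown = []
    for author, ratings in comments:
        if author in blacklist:
            continue
        if author in whitelist or rank(ratings) >= threshold:
            shown.append(author)
    return shown

comments = [
    ("alice", [5, 4, 5]),   # highly rated by the moderators
    ("bob",   [0, -1, 1]),  # poorly rated
    ("carol", [2, 2, 3]),   # middling
]

# A demanding reader who nonetheless always wants to see bob's posts:
print(visible(comments, threshold=3, whitelist={"bob"}))  # ['alice', 'bob']
```

The point of the design is that each reader's choices change only his or her own view: another user calling `visible(comments, threshold=-1)` would see every comment, from the same unaltered pool.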
Slashdot is highly user-driven, not only in regard to the content, but also in giving users the ability to determine what they want to see and how, without affecting what others can see. While one user might choose to see nothing but the most highly ranked comments within a particular category, another might relish seeing all posts in all sections. Slashdot has managed to create a forum with more than 500,000 users in which a comment is rarely ever deleted (usually a court order is necessary for this), without it becoming the kind of useless mess into which the unmoderated nettime channel declined. This is largely due to the greater social sophistication of the platform and its flexibility in modulating the flows of texts.
Collaborative Text Editing: Wikipedia
Wikipedia is the poster child of the open content community. It is a collaboratively developed encyclopedia, based on a custom-designed version of a wikiweb. The distinguishing feature of a wikiweb is that virtually all pages can be edited easily, on the spot, by any user. As such, it comes close to Berners-Lee's original plan for the web, in which users were envisioned as being able to edit what they see (this is why he made the source code visible). Founded in January 2001, Wikipedia reached 100,000 articles in February 2003 and continues to grow at a rate of some 200 articles per day (as of October 2002); the exact number depends on how one defines 'article.'
Of course, the number of articles is only one indicator, and a very unreliable one at that. It does not even indicate how comprehensive the project is – not all subject areas are equally represented – nor does it indicate the quality of the articles.
For quality control, Wikipedia relies on a principle widely used in the Open Source community and famously expressed by Eric S. Raymond as: "Given enough eyeballs, all bugs are shallow." This means that errors one person introduces are likely to be spotted by someone else. The more people who can look at an article, the higher the chances that someone will find an error. And if the possibility exists, someone will correct it. Based on this assumption, Wikipedia hopes to grow not only in breadth, but also in depth, through the continuous improvement of articles. So far, this hope has been well, though unevenly, founded.
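The mechanism underlying this hope can be sketched in a toy model. This is illustrative only, not Wikipedia's actual software (the page, authors and texts are invented): any user can edit any page, every revision is kept, and the current page is simply the latest revision, so an error introduced by one person remains exposed until another person overwrites it.

```python
# A hypothetical toy wiki: open editing plus full revision history,
# the combination the "many eyeballs" principle depends on.

class Wiki:
    def __init__(self):
        self.history = {}  # title -> list of (author, text) revisions

    def edit(self, title, author, text):
        # Anyone can edit any page; every revision is preserved.
        self.history.setdefault(title, []).append((author, text))

    def read(self, title):
        # The current page is simply the most recent revision.
        return self.history[title][-1][1]

w = Wiki()
w.edit("ARPANET", "ana", "ARPANET went online in 1985.")  # an error slips in
w.edit("ARPANET", "ben", "ARPANET went online in 1969.")  # another user fixes it

assert w.read("ARPANET") == "ARPANET went online in 1969."
assert len(w.history["ARPANET"]) == 2  # the erroneous revision stays visible
```

Keeping the full history is what makes the correction loop low-cost: a bad edit is never destructive, because any earlier revision can be restored by a further edit.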
In addition, there is currently a discussion about introducing an approval mechanism by which "experts" in a subject area could rate articles as "approved", hence reliable, information. However, apart from the problem of how to validate an expert, introducing a distinction between experts and normal users would probably upset the egalitarian ecology of the project, since maintaining such a distinction would require a more explicit hierarchy. So far, the problem of quality control has not appeared significant enough to warrant dealing with such thorny issues beyond general discussion.
Of course, Wikipedia is not an open 'free-for-all'. There are elaborate social and technical mechanisms to balance openness with keeping the project on track. On the social side there are detailed policies on what the project is about – collaboratively writing an encyclopedia under the GNU Free Documentation License – and what it is not – a dictionary, a discussion board etc. There are also elaborations on how to write an article, examples of what are regarded as exceptionally well-written ones, as well as a set of 'conventions' to increase consistency in things like titles. However, these rules and policies are not binding in any formal way; rather, they serve as orientation for newcomers and as points of reference when dealing with conflicts.
Each of these projects – nettime, Slashdot, Wikipedia – has been successful in a hostile environment because each has found its own, distinct way of keeping the project open enough to attract interesting content while keeping deliberate or automated disruption coming out of the larger environment to a minimum.
Collaborative infrastructures: Peer-to-peer Networks
The particular openness of the Internet allows not only the creation of new applications that can be freely introduced within the framework of the existing architecture, but also the creation of alternative architectures either above or below the TCP/IP level. Collaborative distribution platforms take advantage of this by turning a decentralized client-server structure into a truly distributed peer-to-peer network. Changing the architecture that resides on top of the TCP/IP level is the approach taken by peer-to-peer file sharing systems in their various configurations. The problem of the file sharing systems is less one of signal to noise, even though one of the content industry's counter-strategies for disrupting these systems is to flood them with large junk files, thus introducing noise into a system that otherwise has been remarkably noise-free.
The hostility of the environment of file sharing systems, then, lies not on the level of noise, but on the level of legality. There are two strategies for dealing with this. The mainstream approach is to develop a system that keeps so-called illegal content out. Napster Inc., after losing a series of court trials, was forced to move in this direction and develop a system that would reliably keep out material that infringed on someone's copyright. Given the complexity of the copyright situation, this is a nearly impossible task, and Napster, unable to fulfill this order, completely disintegrated as a company and as a technical system. Others have stepped up but have either suffered a similar fate or are likely to encounter it in the future. At this point, it seems simply impossible to create an open distribution system that can co-exist with the current restrictive IP regimes. Consequently, most commercial interest has been refocused on building closed distribution systems based on various digital restriction management systems (DRMs). This does not mean that there are no more collaborative, peer-to-peer distribution channels. However, their approach to surviving in a hostile environment has been to devolve to such a degree that the entity which could be dragged to court disappears. Without a central node, or a company financing the development, there is less clarity as to who can be held responsible. Truly distributed file sharing systems like Gnutella are one approach, though there are still significant technical issues to be solved before the system becomes fully functional on a large scale.
Freenet, the peer-to-peer network for anonymous publishing, has chosen another approach. Here content is never stationary, in the way that URLs are stationary; rather, it moves around from node to node within the network, based on demand. Consequently, its location is temporary and not indicative of where it was entered into the system. With all content being encrypted, the owner of a Freenet node can reasonably claim not to have knowledge of the content stored on her node at a particular time, and thus avoid the liability of an ISP, which is required by law to remove objectionable content when it becomes aware of it. So far, the strength of this strategy of shielding the owner of a node from liability for the content stored has not been tested in the courts, as the entire system is still embryonic. However, it is at least an innovative conceptual approach to keeping the network open and robust against (legal) attacks.
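The core idea of demand-driven migration can be illustrated with a sketch. This is a deliberately simplified, hypothetical model, not Freenet's real routing protocol (node names, keys and the caching rule are invented for the example): when a request finds an encrypted blob, copies are cached on the nodes the request passed through, so content drifts toward where it is demanded and its current location says nothing about where it was inserted.

```python
# A hypothetical sketch of demand-driven content migration,
# Freenet-style: content is encrypted and moves toward requests.

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}  # key -> encrypted blob, opaque to the node's owner

def request(key, path):
    # 'path' is the chain of nodes a request traverses. When the blob
    # is found, it is cached on every node along the way back, so the
    # content migrates toward the source of the demand.
    for i, node in enumerate(path):
        if key in node.store:
            blob = node.store[key]
            for earlier in path[:i]:
                earlier.store[key] = blob
            return blob
    return None  # not found along this path

a, b, c = Node("a"), Node("b"), Node("c")
c.store["k1"] = b"\x8f\x02\x9a"  # an encrypted blob, unreadable to c's owner

request("k1", [a, b, c])
# After one request, copies now also sit on a and b:
assert "k1" in a.store and "k1" in b.store
```

Because every store holds only opaque ciphertext and the set of locations keeps shifting with demand, no single node operator can be said to knowingly host any particular piece of content, which is precisely the legal shield described above.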
Changing the architecture that resides below the TCP/IP level is the approach taken by the slowly developing wireless community networks, such as London's Consume network. The basic idea behind wireless community networks is to substitute for the infrastructure of the commercial telecom firms, as the basis of data flows, a distributed infrastructure of wireless points that route traffic across a chain of nodes maintained by a (local) community. This would allow the creation of a network that is entirely open (within the community) but has none of the traditional constraints – be they legal or bandwidth-related – that characterize conventional network architecture. This is a relatively new approach and so far has not come to fruition. Consume has chosen a bottom-up approach in which the individual community members were to maintain their own nodes. Technical difficulties, however, have proven to be a very substantial hurdle for all but the most dedicated geeks. In an environment already highly saturated with connectivity, this has been (near) fatal. Consume has not yet managed to gain critical mass, i.e. to become a real community.
A different approach was taken in Berlin, where the project chose to rely on a commercial provider to plant and maintain the wireless nodes and to use the community as free beta-testers. However, in the currently harsh economic environment, the willingness of the provider to support a non-commercial project with only limited advertisement potential dried up quickly, and the project collapsed.
It is too early to say whether wireless community networks are doomed to become entries in Bruce Sterling's dead media list or whether they will take off under the right circumstances. What they show, however, is that collaborative infrastructure can also be started at the hardware level, changing the Internet at its most fundamental layer while still remaining part of the larger world speaking the Esperanto of TCP/IP.
The potential of collaborative media is substantial. On the one hand, the mainstream media landscape is bland and excludes such a significant range of the social, cultural and political spectrum that there is a broad need for access to different means of producing and distributing media content. The potential is even greater because the flexibility of the Internet allows not only "alternative" content to be created and distributed, but offers new models of doing just that. The issues, then, are radically different from those involved in creating an alternative newspaper. The structure of the advertisement-dominated market gives such an enterprise hardly any room to succeed. The real potential is to create a new model of media production/distribution that is not subject to the traditional economic pressures but relies centrally on collaborative, distributed efforts that allow new subjectivities to emerge.

Collaboration based on self-motivation – rather than on hierarchical assignment – needs openness to reach a scale at which the output can really match that of traditional media production. The open source movement has already achieved this in some areas and is close to doing so in others. Slashdot as a point of publication has achieved the same, or even a higher, level of visibility than traditional technology publishers. Wikipedia, at the least, has the potential to become a serious competitor to popular commercial encyclopedias. However, the need to sustain openness in a hostile environment demands further innovation in social organization and technological tools. The danger is that openness becomes increasingly restricted to closed groups, fragmenting the collaborative media landscape into self-isolating groups whose cultural codes become increasingly incommunicable. The potential, however, is to give meaning to the vaporous term that has been tossed around lately: civil society becoming the "second superpower".
This, however, will not happen unless we have a media infrastructure that provides a structural alternative to the media dominated by the powers that be.