How would culture be created if artists were not locked into romantic notions of individual authorship, and if the associated drive to control the results of their labour were not enforced through ever-expanding copyrights? What if cultural production were organized via principles of free access, collaborative creation and open adaptability of works? In themselves, the practices of a collective and transformative culture are not entirely new. They were characteristic of (oral) folk cultures prior to their transformation into mass culture by the respective industries during the twentieth century, and they survived as counter-currents in the numerous avant-garde movements (dada, situationism, mail art, neoism, plagiarism, plunderphonics, etc.) which re-invented, radicalized and technologically upgraded various aspects of them. Yet, over the last decade, these issues – of open and collaborative practices – have taken on an entirely new sense of urgency. Generally, the ease with which digital information can be globally distributed and manipulated by a very large number of people makes free distribution and free adaptation technically possible and a matter of everyday practice. Everyone with a computer already uses, in one way or another, the copy & paste function built into all editors. This is what computers are about: copying, manipulating and storing information. With access to the internet, people are able to sample a wide range of sources and make their own works available to potentially large audiences.

More specifically, the free and open source software (FOSS) movement has shown that it is possible to create advanced informational goods based on just these principles. They are enshrined as four freedoms in the GNU General Public License (GPL), the legal and normative basis of much of this movement. These are, it is worth repeating: the freedom to use a work for any purpose, the freedom to change it, the freedom to distribute exact copies of it, and the freedom to distribute transformed copies. These freedoms are made practicable through the obligation to provide the necessary resources; for software, this is the human-readable source code (rather than just the machine-readable binaries, consisting of nothing but ones and zeros). After close to two decades of FOSS development, it has become clear that it embodies a new mode of production, that is, a new type of social organization underpinning the creation of a class of goods. To stress that this mode of production need not be limited to FOSS, Yochai Benkler has called it ‘commons-based peer production’,[2] meaning that the resources of production (e.g. the source code) are not privately owned and traded in markets, but managed as a commons, open to all members of a community of volunteers (those who accept the conditions of the GPL).
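
The source/binary distinction can be made concrete in a few lines of code. The following Python sketch (my illustration, not part of the original argument) shows the same logic once as human-readable source, which anyone can study and change, and once disassembled into the machine-oriented instructions that, on their own, are far harder to work with:

```python
# Illustrative sketch of the source/binary distinction: Python
# bytecode stands in here for the machine-readable binaries of
# compiled languages.
import dis

def greet(name):
    """Human-readable source: easy to study, modify and redistribute."""
    return f"Hello, {name}!"

# The machine-oriented form of the same logic: executable, but far
# harder to study or adapt than the source above.
dis.dis(greet)
```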

It is perhaps not surprising that such a ‘really existing utopia’ has held a strong attraction for cultural producers whose lives are made difficult by having to conform either to the demands of the culture/creative industries, or to those of the traditional art markets. Thus, over the last couple of years, we have seen an explosion of self-declared ‘openness’ in virtually all fields of cultural production, each trying, in one way or another, to emulate the FOSS style of production, usually understood as egalitarian and collaborative.

However, despite all the excitement, the results have been, well, rather meagre. There are plenty of collaborative platforms waiting to be used. Those that are used often produce material so idiosyncratic that it is of relevance only to the communities creating it, barely reaching beyond self-contained islands, always on the brink of collapsing into de facto closed clubs of the like-minded. Only one example springs to mind of something that has reached a size and impact comparable to major FOSS projects: Wikipedia, the free online encyclopedia.

The exceptional status of Wikipedia suggests that the FOSS model is not easily transferable to other domains of cultural production.[3] Rather, it suggests that there are conditions specific to software development. For example, most software development is highly modular, meaning many people can work in parallel on self-contained aspects with little coordination between them. All that is necessary is to agree on certain standards (to make sure the various modules are compatible) and on a loosely-defined direction for the development. This gives the individual contributors a high degree of autonomy, without diluting the overall quality of the emergent result (illustrated in the sketch after this paragraph). This, of course, does not apply to literary texts, films, or music, where the demands for overall coherence are very different. It’s not surprising, then, that we still have not seen, and I suspect will never see, an open source novel.[4] Another important aspect in which software development differs from most cultural production is its economic structure. Around three quarters of professional programmers (meaning people who are paid to write code) work for companies that use software but do not sell it.[5] Commodity software (à la Microsoft) has always been only a small part of all software produced, and the sector as a whole has always been oriented towards providing services. Hence, it’s easy to imagine an industry providing an economic basis for long-term FOSS development. And such an industry is emerging rapidly. Of course, artists, for very good reasons, are reluctant to accept a service model forced upon them under the label of the creative industries,[6] which leaves them dependent either on the traditional art market or on the limited commissions handed out by public and private foundations. There are numerous other aspects that differentiate the problem of software development from other domains of immaterial production; I have sketched them elsewhere.[7] In the context of self-directed cultural or artistic projects, one issue seems to pose particular difficulty for open projects: quality control.
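
The modularity argument can be made concrete with a minimal Python sketch (hypothetical names, not drawn from any real project): contributors agree only on a shared interface, the ‘standard’, and can then develop their modules independently and in parallel:

```python
# Minimal sketch of modular development: the interface is the agreed
# standard; implementations are self-contained and interchangeable.
from abc import ABC, abstractmethod

class Codec(ABC):
    """The agreed standard: any module honouring it is compatible."""
    @abstractmethod
    def encode(self, data: bytes) -> bytes: ...
    @abstractmethod
    def decode(self, data: bytes) -> bytes: ...

class ReverseCodec(Codec):
    """One contributor's self-contained module."""
    def encode(self, data: bytes) -> bytes:
        return data[::-1]
    def decode(self, data: bytes) -> bytes:
        return data[::-1]

class XorCodec(Codec):
    """Another contributor's module, written independently."""
    KEY = 0x2A
    def encode(self, data: bytes) -> bytes:
        return bytes(b ^ self.KEY for b in data)
    decode = encode  # XOR with the same key is its own inverse

# Either module drops into the larger system unchanged: the shared
# standard, not central coordination, guarantees compatibility.
for codec in (ReverseCodec(), XorCodec()):
    assert codec.decode(codec.encode(b"peer production")) == b"peer production"
```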

What’s Good, And Who Is Better?

What makes a work of art a good work of art? How can we reliably judge the ability of one artist to be superior to that of another? These are intractable questions that most people, even art critics, try to avoid, for very good reasons. Throughout the twentieth century, the definition of art has been expanded continuously, to the degree that it has become self-referential (à la “art is what artists do”, or “art is what is shown in art institutions”). As an effect of the ensuing uncertainty, aesthetic judgements have become more subjectivized than ever, and the range of aesthetic preferences is extremely wide. The differences among genres, even if they seem minuscule to outsiders, tend to be very significant for those who care. The result is that the number of people who share a sense of what makes a cultural product high-quality is usually very small. Except, of course, when the product is supported by massive marketing campaigns that artificially inflate such a shared sensibility into a mass market. Thus cultural communities are either highly fragmented or commodified, making collaboration either exceedingly difficult or illegal.

In software, this is different. It is usually not so difficult to determine which program is good and which is not, because there are widely accepted criteria that are objectively measurable. Does a program run without crashing? Does it do certain things that others don’t? How fast is it? How much memory does it use? How many lines of code are necessary for a particular feature? But it’s not just that technical questions are ‘objective’ and cultural ones are ‘subjective’. In order to contribute seriously to a FOSS project (and thereby earn status and influence within the community), one needs to acquire a very high degree of proficiency in programming, which can only be gained through deep immersion in the culture of engineering, whether through formal education or informal learning. Either way, the result is the adoption of a vast, shared culture, which is, to a significant degree, global. It is this shared culture of engineering that makes certain measurable aspects of a program the defining ones. Faster, for example, is always better. While there is a slow food movement, extolling the virtues of traditional cooking over fast food, there is no slow computing movement. Even those subcultures which dedicate themselves to old platforms try to max them out (make them run as fast as possible).
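
The measurability invoked here is easy to demonstrate. A small Python sketch (function names are my own, purely illustrative) compares two functionally identical implementations on one of the criteria listed above, speed:

```python
# Illustrative benchmark: two implementations of the same task,
# compared on the objectively measurable criterion of speed.
import timeit

def sum_loop(n):
    """Straightforward but slower: an explicit Python-level loop."""
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    """Same result, using the optimized built-in."""
    return sum(range(n))

for fn in (sum_loop, sum_builtin):
    seconds = timeit.timeit(lambda: fn(100_000), number=100)
    print(f"{fn.__name__}: {seconds:.3f}s")  # faster is, by shared consensus, better
```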

This is not to say that the programming community is free of deep disagreements that cannot be reconciled by reference to objective measurements. There are plenty of them, usually concerning the virtues or vices of particular programming languages, or fundamental questions of software architecture (for example, within the FOSS world, the never-ending debate over the monolithic Linux kernel versus the GNU Hurd microkernel). But even where these differences of opinion are fundamental, the communities that build up around each position can still be large enough to find the critical mass of contributors for interesting projects.

However, the objectifying and solutions-oriented character of a widely shared engineering culture is not the only reason why the assessment of quality in software is not such a quarrelsome problem. At least as important is the fact that the tools and information necessary to assess quality are also widely available. Indeed, software is, at least in some respects, a self-referential problem: it can be solved by reference to other software and assessed within closed environments. A skilled programmer has all the tools needed to examine someone else’s code on his/her own computer. This is still not an easy task – bug fixing is difficult – but since every programmer has all the tools at his/her disposal, it can be made easier by increasing the number of programmers looking at a problem. The more people search for the problem, the more likely someone will find it, because, in principle, each of them could find it. This is what Eric Raymond means when he argues that “given enough eyeballs, all bugs are shallow”. As a result, it is possible to reach a relatively unproblematic consensus about which code is of high quality and which is not, and, by extension, to establish a hierarchy, or pecking order, among programmers.
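
A small Python sketch (a hypothetical example of mine, not Raymond’s) shows why shared tools make bugs ‘shallow’: anyone with an ordinary computer can reproduce the failure, and among enough readers someone will spot the one-line fix:

```python
# Hypothetical example of a 'shallow' bug: reproducible by anyone
# who can run the code, and trivially fixed once spotted.

def median_buggy(values):
    # First attempt: forgets to sort before taking the middle element.
    return values[len(values) // 2]

def median_fixed(values):
    # A second pair of eyes spots the missing sort and fixes it.
    return sorted(values)[len(values) // 2]

sample = [3, 1, 2]
assert median_buggy(sample) == 1   # wrong result, visible to any reviewer
assert median_fixed(sample) == 2   # the corrected behaviour
```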

This is not so terribly different from peer review in science. People look at each other’s work and decide what is good and what is not. The difference lies in what it takes to become a peer. For FOSS, all you need are the necessary skills (hard to master, of course, but available to the dedicated) and a standard computer with an internet connection. Not much of a hurdle for those who care. It is then the quality of the code, assessable by everyone, that shows whether you are a peer or not. In science, what you need is not just the necessary skills, but often a vast infrastructure (laboratories, machinery, access to archives and libraries, assistants, funding, etc.) to make use of those skills. This expensive infrastructure is usually accessible only to employees of large institutions, and in order to get employed, you need the right credentials. Thus, in science, peers are established by a mixture of credentials and positions, because without those you cannot seriously assess the publications of other researchers, for example by repeating their experiments.

If peer review is essential to establishing quality control, and yet it is difficult to establish reliably who is a peer, a project runs into trouble. The current difficulties of Wikipedia are instructive here. Wikipedia is an attempt to create an online encyclopedia, written entirely by users, which can exceed the range and quality of the most reputable traditional reference works. In just five years, hundreds of thousands of articles in dozens of languages have been written, and in quite a few cases these articles are of very high quality. In terms of modularity and economic structure, Wikipedia is very similar to software development. This is one of the reasons why the open source approach has worked so well. Another reason for its success is that the Wikipedia community has managed to create a widely shared understanding of what a good article should look like (it’s called the ‘neutral point of view’, NPOV).[8] This gives a formal baseline (disputed perspectives on a subject should be presented side by side, rather than reconciled) against which to assess articles. However, this criterion is only formal: it says nothing about whether the perspectives presented are factually correct or in accord with the relevant sources.

The basic mechanism of quality control in Wikipedia is the idea that, as more people read a particular article, mistakes will be found and corrected. So, over time, articles improve in quality, asymptotically approaching the state of the art. Given enough eyeballs, all errors are shallow. However, practice has shown that this is not necessarily the case. It holds more or less true for formal aspects, like spelling and grammar, which can be assessed simply by reading the article. In terms of actual content, however, this model clearly shows its limits. Often, the relevant facts are not easy to come by and are not available online. Rather, in order to get at the facts, you need access to specialized resources that few people have. If such facts are included and contradict common knowledge, the chances are that they will be ‘corrected’ as mistakes by people who think they know something about the topic, but whose knowledge is actually shallow. This is less of a problem in very specialized and uncontroversial areas (such as the natural sciences)[9] that are primarily of interest to specialists, but it is a serious problem in areas of more general knowledge. It shows that even for functional works, the addition of more people does not necessarily help to improve the quality – even if these people are well-intentioned – because most of them do not have the necessary information to assess that quality.

Wikipedia is caught in a bind: it does not want to restrict the rights of average users in favour of experts, but, rejecting formal credentials, it has no reliable way to assess expertise (the number of entries, or other statistical measures, shows devotion, but not expertise). And since one cannot simply ‘run’ an article to check whether it contains a bug, it is impossible to validate the quality of an article’s content simply by reading it carefully. In order to do that, one needs access to the relevant aspects of external reality, and this access is often not available. Because there is no direct way to recognize expertise, Wikipedia is open to all, hoping for safety in numbers. Given the highly modular structure and the factual nature of the project, supported by the NPOV editorial guidelines, it has thrived tremendously. Paradoxically, the limitations of its method have only begun to show after it has become so successful that its claim to supersede other authoritative reference works has to be taken seriously.[10]

Cultural projects, then, face two problems. If they are of an ‘expressive’ type, the communities that agree on quality standards are so small that collaboration tends to resemble a club more than an open source project. Even if the works are functional, like Wikipedia, the challenge of determining who is an expert without relying on conventional credentials is significant. Currently, the problem is side-stepped by reverting to simplistic egalitarianism or, as I would call it, undifferentiated openness: everyone can have a say and the most tenacious survive.

Undifferentiated Openness

The openness in open source is often misunderstood as egalitarian collaboration. However, FOSS is primarily open in the sense that anyone can appropriate the results and do with them whatever he or she wants (within the legal/normative framework set out by the license). This is what the commons, a shared resource, is about: free appropriation. Not everyone can contribute, however. Everyone is free, indeed, to propose a contribution, but the people who run the project are equally free to reject it outright. Open source projects, in their actual organization, are not egalitarian, and not everyone is welcome. The core task of managing a commons is not just to ensure the production of resources, but also to prevent their degradation through the addition of low-quality material.

Organizationally, the key aspects of FOSS projects are that participation is voluntary and – what is often forgotten – that they are tightly structured. Intuitively, this might seem like a contradiction, but in practice it is not. Participation is voluntary in a double sense. On the one hand, people decide for themselves if they want to contribute. Tasks are never assigned; people volunteer to take responsibility. On the other hand, if contributors are not happy with the project’s development, they can take all of the project’s resources (mainly, the source code) and reorganize the project differently. Nevertheless, all projects have a leader, or a small group of leaders, who determine the overall direction of the project and decide which contributions from the community are included in the next version and which are rejected. However, because of this doubly voluntary nature, project leaders need to be very responsive to the community, otherwise the community can easily get rid of them (which is called ‘forking the project’). The leader has no other claim to his (and it seems always to be a man) position than being of service to the community. Open source theorist Eric S. Raymond has called this a benevolent dictatorship.[11] More accurately, it is the result of a voluntary hierarchy in which authority flows from responsibility (rather than from the power to coerce).[12]

Thus, the FOSS world is not a democracy, where everyone has a vote, but a meritocracy, where the proven experts – those who know better than others what they are doing and do it reliably and responsibly – run the show. The hierarchical nature of the organization directly mirrors this meritocracy. The very good programmers end up on top; the untalented ones either drop out voluntarily or, if they become too distracting, are kicked out. Most often, this is not an acrimonious process, because in coding it is relatively easy to recognize expertise, for the reasons mentioned earlier. No fancy degrees are necessary. You can literally be a teenager in a small town in Norway and be recognized as a very talented programmer.[13] Often it is a good strategy to let other people solve a problem more quickly than one could oneself, since their definition of the problem and its solution will usually be very similar to one’s own. Thus, accepting the hierarchical nature of such projects is easy: it is usually very transparent and explicit. The project leader must not only be a recognized expert, but also lead the project in a way that keeps everyone reasonably happy. The hierarchy, voluntary as it may be, creates numerous mechanisms of organizational closure, which allow a project to remain focused and keep the noise-to-signal ratio of its communication at a productive level.

Without an easy way to recognize expertise, it is very hard to build such voluntary hierarchies based on a transparent meritocracy, or other filters that increase focus and manage the balance between welcoming people who can really contribute and keeping out those who cannot. Wikipedia illustrates the difficulties of reaching a certain level of quality on the basis of undifferentiated openness.

‘Expressive’ cultural projects face even greater hurdles, because the assessment of quality is so personal that, at the level of production, collaboration rarely extends beyond a very small group – say, a band, or a small collective of writers such as Wu Ming.

Open Culture Beyond Open Source

This does not mean that FOSS cannot serve as a model for open cultural production in other fields. However, the really relevant part seems to be not so much the collaborative production aspect as the freedom of appropriation, and the new model of authorship, centred on community involvement rather than individual autonomy. The GPL, and other such licenses, like those of Creative Commons, are very good instruments for enshrining these basic freedoms. They will create the pool of material in which a new, digital, transformative culture can grow. And indeed, we are seeing the emergence of such resource pools. One example is Flickr.com, a rapidly growing repository of images, tagged and searchable, contributed entirely by users. While this is not a commons in a legal sense (the images on Flickr.com remain the property of their authors), nor, really, in intention, the fact that the resource as a whole is searchable (through user-defined image tags) does create a de facto commons. The collaboration here is very limited, restricted to contributing individual works to a shared framework that makes them easily accessible to others. There is no common project, and collaboration between users is minimal, but it can still be understood as ‘open culture’ because it makes the resources of production, the images, widely available. The production of new cultural artefacts remains, as always, in the hands of individuals or small groups, but the material they work with is no longer only their own inner vision, honed as autonomous creators, but also other people’s work, made available in resource pools.
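
How user-defined tags turn individually owned items into a searchable de facto commons can be sketched in a few lines of Python (hypothetical data and names; this is not Flickr’s actual interface):

```python
# Minimal sketch of a tag index: each image stays with its owner,
# but the shared index makes the whole pool searchable by anyone.
from collections import defaultdict

index = defaultdict(set)

def tag(image_id: str, *tags: str) -> None:
    """Attach user-defined tags to an image."""
    for t in tags:
        index[t.lower()].add(image_id)

tag("img_001", "london", "protest")
tag("img_002", "london", "fog")

# Anyone can now search across everyone's contributions.
print(index["london"])  # {'img_001', 'img_002'}
```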

At this point, this is entirely unspectacular. But by restricting openness to the creation of a pool of relatively basic resource material, rather than complex artistic productions, issues of quality control and of organizing collaboration – with all the difficulties of coordination in the absence of clear markers of quality – are sidestepped. Nevertheless, over time, I think such de facto commons can contribute to a slow transformation of culture: from a collection of discrete, stable and ownable objects, created by autonomous, possessive individuals, to ongoing adaptations, translations and retellings within relevant contexts. Perhaps out of this a new sense of authorship will emerge, along with new communities in which certain criteria of quality are widely accepted (akin to ‘community standards’). Only once this happens can truly collaborative modes of artistic production develop, similar to what we have seen in FOSS.

However, if this happens at all, it will be a very long-term process.

 

FOOTNOTES

[1]    Thanks to Armin Medosch for comments on a draft version.

[2]    Yochai Benkler, 'Coase’s Penguin, or, Linux and The Nature of the Firm', Yale Law Journal, No. 112, 2002, http://www.benkler.org/CoasesPenguin.html.

[3]    Unless technically restricted, informational goods are perfectly copyable and distributable for free. This makes them sufficiently distinct from material goods to constitute an ontologically different class of objects, even if the transfer between the two, say printing a digital text on paper, is often not difficult.

[4]    Even for non-fiction books, this has not worked out so far, with the possible exception of educational textbooks, a genre characterized by the most unimaginative writing.

[5]    http://opensource.org/advocacy/jobs.html

[6]    The classic study is still Angela McRobbie’s British Fashion Design: Rag Trade or Image Industry?, Routledge, London, 1998.

[7]    See my essay ‘One Size Doesn’t Fit All’ in Open Cultures and the Nature of Networks, Futura publikacije, Novi Sad, 2005, http://felix.openflows.org/html/kuda_book.html, for an overview of these differences.

[8]    http://en.wikipedia.org/wiki/Wikipedia:Neutral_point_of_view. This issue is independent of the problem of people deliberately inserting false information just for the fun of it (or for more strategic reasons).

[9]    See Nature 438, 15 December 2005, pp 900-901, http://www.nature.com/nature/journal/v438/n7070/full/438900a.html.

[10]    Wikipedia co-founder Larry Sanger thinks that these limitations are so dramatic that he is preparing, with the help of $10 million in funding, to start another free reference work, Digital Universe, this time edited, or at least supervised, by experts. See http://www.theregister.co.uk/2005/12/19/sanger_onlinepedia_with_experts/

[11]    Eric S. Raymond, 'The Cathedral and the Bazaar', First Monday, Vol. 3, No. 3, 1998, http://www.firstmonday.dk/issues/issue3_3/raymond/ (all further quotes from Raymond are from this article, unless otherwise noted).

[12]    For the best analysis of the governance systems of FOSS projects, see Steven Weber, The Success of Open Source, Harvard University Press, Cambridge, MA, 2004.

[13]    Jon Lech Johansen, who gained international fame as the person who wrote the code to crack the DRM system on DVDs (and many other systems subsequently), lived at the time in Harstad, Norway.

Published in: Media Mutandis: a NODE.London Reader (March 2006).