Abstract: This article assesses how people and their actions are made visible within and through computer networks. The aim is to differentiate between modes of visibility based on the politics they support. The main distinction is between vertical visibility (i.e. the network provider sees everything) and horizontal visibility (i.e. people see each other without anyone seeing everything).
Politics of Networked Visibility
In January 2001, Franco and Eva Mattes, the artist couple then still known as 0100101110101101.ORG, started their project “life sharing”. As they explained in the concept note: “every Internet user has free 24-7 access to [our] main computer: read texts, see images, download software, check 01's private mail, get lost in this huge data maze. ... Contents are not being periodically uploaded the way people build and maintain websites, because 0100101110101101.ORG works directly on the shared computer. The home computer has been turned into a transparent webserver, therefore users can watch in real time the "live" evolution of the work.”1 The project ran for more than two years without interruption.
As an entry to the politics of networked visibility, the project is interesting for several reasons. First, it closely traced a general shift – following the bursting of the dotcom bubble – from emphasizing “web-design”, that is, surfaces and visual interfaces, to raw data, the material from which visibility itself is generated.2 Second, it abolished the distinction between public and private through an explicit and radical act (“privacy is stupid.”). Again, this traced a general development accelerated by the then emerging social media. In these systems, the traces of our activities are made visible to others, partly explicitly through “posting” of data in known contexts, partly implicitly through aggregation, at the very least by the providers of these systems themselves, without the knowledge or consent of the individual users. Third, the project played with the tension between visibility as control – we can see everything they do – and visibility as cooperation – rendering themselves available to people who share their interests. Seen from today, the question of infrastructure – sharing data on their own server – is of interest as well. At the time, it was a simple necessity, as there simply were no cloud servers that could have been used.
In this text, I will address the character of the data, two types of visibility created from it, the main social effects of these visibilities – changing patterns of cooperation and control – and finally, the question of infrastructure.
First, some explanation of what I understand by networked visibility. It is created by the capacity to record, store, transmit and access communication, action and states (e.g. ways of being, such as being off-line) generated through digital networks. The data thus produced has several features. First, it is durable, thus it extends across time. In other words, once recorded, the data can remain available for a very long time. Second, the data is highly transmittable, thus it extends across space. Data generated at one place is easily available at another. There are virtually no geographic boundaries. Third, the records have no scale limit; they can span a very large number of dimensions of social life and do so in great detail. In terms of analysis and creating visibilities from this data, there is no longer a difference between focusing on large groups and on individual persons. The data can easily be aggregated (building groups) and de-aggregated (extracting individuals), as is in the interest of those who have access to the masses of data. The current limits are not related to scale, but to the ability to automatically translate action and states into records, and to the ability to render visible people's actions within the near-infinite ocean of raw data thus produced. But these limits are constantly being pushed outwards.
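The ease of aggregating (building groups) and de-aggregating (extracting individuals) such records can be made concrete with a minimal sketch. The records and field names below are invented for illustration; any real system would operate on billions of such entries:

```python
from collections import defaultdict

# Hypothetical event log: each record ties a person to an action and a topic.
records = [
    {"user": "alice", "action": "search", "topic": "flu"},
    {"user": "bob",   "action": "post",   "topic": "flu"},
    {"user": "alice", "action": "post",   "topic": "travel"},
    {"user": "carol", "action": "search", "topic": "flu"},
]

def aggregate(records, key):
    """Build groups: count how many records share each value of `key`."""
    counts = defaultdict(int)
    for r in records:
        counts[r[key]] += 1
    return dict(counts)

def de_aggregate(records, key, value):
    """Extract the individual records behind one aggregate value."""
    return [r for r in records if r[key] == value]

print(aggregate(records, "topic"))           # {'flu': 3, 'travel': 1}
print(de_aggregate(records, "topic", "flu"))  # the three individuals behind the 'flu' group
```

The same data moves without friction between the group view and the individual view; which one is produced depends only on the interest of whoever holds the records.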
Individual people and groups are becoming visible in ways very different from what one might call “traditional social visibility”, created by people knowing each other through direct or indirect encounters (being a friend-of-a-friend in the old sense), as well as from what one might call “bureaucratic visibility”, produced via paper records. In the first case, the information is highly detailed, but not standardized, and it degrades quickly, both in time and space, i.e. people forget, and as information is told and retold, it changes significantly. In addition, it is highly contextualized. It cannot easily be extracted and/or aggregated. In the latter case, the records are stored in archives and filing cabinets, or in centralized, isolated databases into which analog data is entered. While that makes them highly standardized (though not necessarily accurate) and enduring, they still cannot be easily aggregated, and limits of scale are very real, both in the sense of the ability to record people's actions and states and in the ability to make use of the data collected.
Driving the ever more encompassing and piercing forms of networked visibility are both technical and social factors. On the one hand, computers incessantly produce records of their own states and actions. It is these records that make linear development possible, that is, the computer can remember its previous states and make use of them as it continues its processes. This is such a fundamental aspect of how computers work (read, process, write) that extra procedures needed to be developed to get rid of this data (mainly by overwriting it with other, meaningless data). But these procedures are surprisingly complicated; short of physically destroying the recording devices, computer forensics can almost always retrieve some portion of the “deleted” records. Of course, this is not an issue of technological determinism. Contemporary networked systems are designed to take advantage of these technological base-functions in particular ways. The main driver for the production of a specific visibility can be an entirely political one, as was the case with the new regulations concerning data retention in the telecom sector. Here it was clearly a political will, enacted by the EU through a directive in March 2006 and subsequently enshrined in national law throughout the EU, that forced the technological capacity to record and store connection data to be realized in a particular manner regarding what is recorded, how long it is stored, and the rules of access by law enforcement agencies. The telecom providers initially opposed this directive, because the records mandated by the law were of no use to them (they already had all the data they needed). However, they had to pay for the necessary infrastructure investments (which, of course, they passed on to their users, so, in effect, Europeans are paying for their own surveillance every time they use a telephone).
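The overwriting procedure alluded to above can be sketched in a few lines. This is illustrative only (the function name is mine), and, as the paragraph notes, it is no guarantee against forensics: journaling filesystems and SSD wear-leveling may keep copies elsewhere regardless:

```python
import os
import tempfile

def overwrite_and_delete(path, passes=1):
    """Overwrite a file's bytes with zeros before unlinking it.
    Sketch of the principle only: the filesystem may still hold
    stale copies of the data in journals or remapped blocks."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # force the zeros onto the disk
    os.remove(path)

# Usage: create a temporary file holding a "record", then wipe it.
fd, path = tempfile.mkstemp()
os.write(fd, b"sensitive record")
os.close(fd)
overwrite_and_delete(path)
print(os.path.exists(path))  # False
```

The point of the sketch is the asymmetry: recording happens by default, while deliberate forgetting requires extra, and ultimately unreliable, work.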
Thus, networked visibility is the outcome of a technological potential, realized by a wide range of actors to further particular agendas, be they social, commercial, or political. As an effect, an ever growing number of our actions and states are becoming visible across time and space. The character of this visibility is governed by a wide range of policies determining who can see what under which circumstances. On one end of the scale is data – for example in blogs or in publicly accessible databases – that is available to everyone, all the time. On the other end of the scale is data (and the visibility that can be constructed from it) available only to a very small group under tightly controlled circumstances, such as when law enforcement agencies need a written order issued by a judicial officer to access connection data retained by the telecom providers. In between these two extremes, there is an infinite number of informal practices and formal policies, from people sharing information within semi-closed networks of their friends to infrastructure providers having full access to the log-files of everything that goes on within their systems.
Visibility and sociality
Here is my thesis: visibility – the ability to see, or get to know of, other people – is an essential precondition for social action, and the types of visibility produced shape the social actions possible. The sequence of visibility and action is open. The desire to act can come first, and then visibility is created to make such actions possible (think again of data retention policies), or the visibility creates a desire to act (think of posting a comment to an interesting blog post by a previously unknown person). Yet, action always requires visibility. For the sake of simplicity, I want to distinguish two types of visibility – “horizontal visibility” and “vertical visibility” – and the different social dynamics they can produce.
By horizontal visibility I understand people becoming visible to each other. Person A can see person B and vice-versa. This is the visibility of social networks, where people become “friends”, thus getting access to each other's profiles, or of open forums, where people can read everyone's comments. This type of visibility is the precondition for cooperation, because by seeing each other, people can develop a sense of trust. When speaking of cooperation, it's important to remember that cooperation is a mode of organization – working together (voluntarily) for a common goal – that can be applied to all kinds of goals, desirable as well as undesirable ones. By becoming visible to each other, people can assess the chances of having or developing a common goal and, based on their shared interest, risk trusting each other. Initially, perhaps only in trivial forms, but over time, as visibility expands, trust as well as cooperation can deepen. In the absence of a formal organization which can guarantee a certain accountability, trust is an essential element of cooperation.
The horizontal visibility in digital networks is very conducive to fostering trust between individuals. For one, people are usually present as individuals (rather than in formal roles), thus creating a continuity over time and contexts. In other words, people are present with the stories of their individual lives, even if only in a fragmentary way. But often, a few fragments of a person's history are enough to begin cooperation based on trust, because the cooperation itself is often “weak”, that is, it is limited in scope, limited in risk and limited in commitment. It's quickly established and given up again if it turns out to be less rewarding than anticipated. Limited acts of (very) weak cooperation in this sense are, for example, tagging one's images on Flickr so that they can be found, editing a page on Wikipedia, answering an email from a person one does not know, posting to a forum or subscribing to a Facebook campaign. The weaker the cooperation, the lower the hurdle to engage in it. The hurdle is often so low that it's easier to simply assume cooperative behavior, and discontinue it if the feedback is negative, than to invest significant resources in evaluating the basis of trust beforehand. The stronger and more substantive the cooperation, the more people need to know about each other. But getting to know each other is relatively easy, because visibility makes it easy to find people with shared interests and to assess their commitment to these interests based on their previous history. The practice of weak cooperation is the first act in a series of exchanges that can extend trust and thus enable people to enter into more substantial forms of cooperation. We can observe this pattern in virtually all large online cooperation projects. Take Wikipedia as a well-known example. To edit a single entry, one can remain anonymous. This is weak cooperation based on weak trust.
But if one changes numerous articles, one starts to become known, with one's history, to other members of the project who care about such things. If the entries are of high value over time, the contributor achieves a certain reputation which will allow him or her to enter into the deeper forms of cooperation at the heart of the Wikipedia project (e.g. becoming an administrator). Thus, the more one invests of one's time and resources into a project, the more one can move from the periphery to the center. Cooperation turns into collective action. Such a seamless transition from weak to strong forms of cooperation is a very common process in such projects.
Thus, the trust created by horizontal visibility and the ease of weak cooperation – in combination with strong cooperation at the core of the project – help to explain the growth of new voluntary associations. Structurally, they are anarchist, in the sense that they are voluntary, based on mutual trust, characterized by an absence of means of coercion and oriented towards producing common resources or collective action. Politically, of course, they are all over the map.
However, there are also downsides to this. If networked visibility becomes the precondition for trust, then being invisible becomes suspicious. This creates social pressure to become and stay visible all the time. Updating one's Facebook profile, one's blog and one's Twitter stream becomes a social imperative. To explore this downside is beyond the scope of this text. The ability to engage easily with one another, on a voluntary basis, is a decentralizing and democratizing factor, even if voluntary cooperation is not always positive in terms of its goals and creates new forms of social pressure.
Yet, even as many web2.0 systems create horizontal visibility, they are constructed as a trade-off, since they are also set up to create vertical visibility. By that, I mean that a few people can see many people without being seen by them. The visibility is strictly one-way. The provider of the networked infrastructure, say Google or Facebook, can see everything that happens on its systems, no matter what kind of privacy settings individual users have adopted. Because of the particular nature of digital data, infrastructure providers can not only see what everyone is doing, they can also interpret this data in numerous ways, thus rendering visible things that barely existed before (for example, emergent group behavior). What is becoming visible to them is the composition and transformation of society in real time, on any level of aggregation. It is as possible to observe individual people as it is to see very large groups. Real-time knowledge can be easily and seamlessly combined with historical data, thus creating crude or not-so-crude forms of prediction. What is being created through vertical visibility is proprietary, governmental knowledge of an unprecedented quality. An example: In November 2008, Google launched a project called Flu Trends. As they explain:
“Our team found that certain aggregated search queries tend to be very common during flu season each year. We compared these aggregated queries against data provided by the U.S. Centers for Disease Control and Prevention (CDC), and we found that there's a very close relationship between the frequency of these search queries and the number of people who are experiencing flu-like symptoms each week. ... The CDC does a great job of surveying real doctors and patients to accurately track the flu, so why bother with estimates from aggregated search queries? It turns out that traditional flu surveillance systems take 1-2 weeks to collect and release surveillance data, but Google search queries can be automatically counted very quickly. By making our flu estimates available each day, Google Flu Trends may provide an early-warning system for outbreaks of influenza. .... For epidemiologists, this is an exciting development, because early detection of a disease outbreak can reduce the number of people affected.”3
The CDC is, indeed, not a bumbling bureaucracy but one of the most sophisticated data gathering and analysis operations in the world. It has a very large network – down to the general practitioners who have to report patients with infectious diseases – and very efficient ways to process this data. Still, Google is 1-2 weeks faster in detecting trends. And what can be done for tracking the flu can be done with any collective phenomenon. In other words, Google, or any other similarly sophisticated provider, can detect emerging social trends weeks, possibly months, ahead of anyone else, even ahead of the very people who make up the trend but might still see themselves as isolated and exceptional. They can see collective behavior before the people involved become aware of their own collectivity. It is a powerful moment to intervene in social dynamics, either to advance or to block them, because the intervention is virtually undetectable. How can we know why something never happened? This type of vertical visibility creates new centers of networked power and control that are, so far, outside any democratic control.
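The statistical core of such trend detection is remarkably simple; the sketch below uses invented weekly numbers (Google's actual model was far more elaborate, regressing CDC data against millions of candidate query terms, but the underlying idea is the correlation of two time series):

```python
# Hypothetical weekly counts: flu-related search queries vs. reported cases.
queries = [120, 150, 210, 400, 650, 610, 480, 300]
cases   = [ 10,  14,  25,  48,  80,  75,  55,  33]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(queries, cases)
print(round(r, 3))  # close to 1.0: query volume tracks case counts
```

Once such a correlation is established on historical data, the query counts alone – available instantly – serve as a proxy for the slower official statistics; this is the 1-2 week head start described above.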
The question of infrastructure
The construction of visibility is a key dimension in setting the framework for social interaction. Typically, horizontal visibility enables voluntary cooperation and social decentralization, whereas vertical visibility is a technique to create new centers of influence and power. In such a situation, what we need is a politics of visibility that can differentiate between the two forms and develop ways of limiting the vertical one. Which brings us back to the question of infrastructure. In the first phase of mass internet culture, the 1990s, it was understood that one needed to run one's own server. There was simply no other way. The life sharing project mentioned at the beginning of this article was part of this early culture. The artists were running their own server, which they chose to open to the public. Yet, they still retained control over it in the sense that it was their server, which – if need be – they could decide to turn off, and they could see what went on there. With the advent of social media, running one's own server has become increasingly complex, and ever more providers have been offering their services in the cloud, which are easier to use, more sophisticated and, often, more reliable than self-managed servers. After all, running a 24/7 infrastructure is very difficult and demanding, and there are good reasons why this should be done by professionals. Thus, over the last couple of years, questions of infrastructure nearly disappeared from the discussion, as the growth of horizontal visibility and new forms of cooperation were achieved at the price of also expanding forms of vertical visibility and allowing the establishment of new centers of networked power. For a while, this price seemed negligible.
Recently, however, the two forms of visibility have begun to come into conflict with one another. Google's motto “don't be evil” is not enough to protect the public, and Facebook is more than a handy tool offered for free by the world's youngest billionaire. Yet, these infrastructures have become indispensable, and it is no longer possible to simply log off. They are essential to the way we live today, not least because voluntary cooperation is so powerful, as the recent events in the Arab world demonstrated.
Yet, much of the public debate around the conflict between the potential to cooperate and the potential for ubiquitous and deep control is framed in the unproductive terms of privacy and individual choice. However, there is also a renewed interest in the possibilities of reversing the trend towards centralization and vertical visibility. Among those focusing again on infrastructure, one project stands out in terms of clarity of vision and ambition: the Freedom Box Foundation, created in early 2011 by Eben Moglen. As they explain:
Because social networking and digital communications technologies are now critical to people fighting to make freedom in their societies or simply trying to preserve their privacy where the Web and other parts of the Net are intensively surveilled by profit-seekers and government agencies. Because smartphones, mobile tablets, and other common forms of consumer electronics are being built as "platforms" to control their users and monitor their activity.
Freedom Box exists to counter these unfree "platform" technologies that threaten political freedom. Freedom Box exists to provide people with privacy-respecting technology alternatives in normal times, and to offer ways to collaborate safely and securely with others in building social networks of protest, demonstration, and mobilization for political change in the not-so-normal times.
Freedom Box software is built to run on hardware that already exists, and will soon become much more widely available and much more inexpensive. "Plug servers" and other compact devices are going to become ubiquitous in the next few years, serving as "media centers," "communications centers," "wireless routers," and many other familiar and not-so-familiar roles in office and home.
Freedom Box software images will turn all sorts of such devices into privacy appliances. Taken together, these appliances will afford people around the world options for communicating, publishing, and collaborating that will resist state intervention or disruption. People owning these appliances will be able to restore anonymity in the Net, despite efforts of despotic regimes to keep track of who reads what and who communicates with whom.4
The project is in its very early stages, but the timing is very apt. The attempt of the Egyptian government to turn off the Internet has shown drastically how vulnerable centralized infrastructures are, even to very clumsy attempts to control them. One can assume that other governments, and of course the service providers themselves, are much less clumsy in their attempts to exercise control, be it for political or commercial reasons. The recent events have also shown the power of horizontal networking in 'not-so-normal times'. The Freedom Box project calls out the false trade-off between the two visibilities offered by the commercial providers, demonstrating that it is possible to construct infrastructures that enable horizontal visibility without falling into the trap of the vertical one. Apparently, the public's desire for such infrastructure is strong. Within a few weeks, the project raised more than $70,000 in an open call, giving it a good chance to grow to a self-sustainable level. Perhaps it offers a chance to escape the trap of web2.0 without sacrificing its undeniable powers. It would transform the social character of the networks, privileging the horizontal over the vertical.
Source: NETWORKS AND SUSTAINABILITY (Acoustic Space No. 10)
Peer-reviewed Journal for Transdisciplinary Research on Art, Science,
Technology and Society, Edited by Rasa Smite, Armin Medosch, Raitis Smits, (pp.13-19)
 Grzinic, Marina (2001). A Hole in the Brain of the Machine. http://www.0100101110101101.org/home/life_sharing/essay.html
 http://www.freedomboxfoundation.org/ (as of Feb. 2011)