Personal Web searching in the age of semantic capitalism: Diagnosing the mechanisms of personalisation

I'm very happy that our new paper on the personalization of search results is out. To our knowledge, it's the first systematic empirical study of how personalized results actually differ from non-personalized ones, and the first to interpret such findings within a critical framework.

First Monday > Volume 16, Number 2 - 7 February 2011
Personal Web searching in the age of semantic capitalism: Diagnosing the mechanisms of personalisation.
Martin Feuz, Matthew Fuller, Felix Stalder


Two current interviews ...

... both on the politics of search engines, one in Die Presse, "Google ist nicht das Rote Kreuz" ("Google is not the Red Cross", 11.05.2010), and one in ORF Futurezone, "Die Herrschaft der Suchmaschinen" ("The Rule of the Search Engines", 08.05.2010).

Three current podcasts

Two interviews and one book review

How Web 2.0 redefines the «public sphere»
With YouTube, Facebook, Google and co., the concept of the public sphere is changing rapidly. With his study «Communication Power», the renowned sociologist Manuel Castells has presented a comprehensive analysis of how Web 2.0 is transforming communication and the media public sphere. Media theorist Felix Stalder, himself one of the net pioneers, knows Castells' theses and discusses them with Barbara Basting. Reflexe, Friday, 9.4.2010, 28 min.


“Deep Search”: The Politics of Search Beyond Google
For years, the digital explosion has confronted us with a veritable data tsunami. It is search engines that help us master this tsunami. The troubling part: what we know about the world, we learn almost always through Google. Put differently: what Google does not find does not exist for many people. Media theorists, cultural scholars, sociologists and political scientists examine this situation and its implications in the anthology “Deep Search – Politik des Suchens jenseits von Google”. Philipp Albers has read the book. 26 March 2010, 10 min.

Comparing this review with the one in the FAZ, one has to ask where the quality media really are today.

The Second Index. Search Engines, Personalization and Surveillance (Deep Search)


Google’s well-known mission is “to organize the world’s information”. It is, however, impossible to organize the world’s information without an operating model of the world. Melvil(le) Dewey (1851-1931), working at the height of Western colonial power, could simply take the Victorian world view as the basis for a universal classification system, which, for example, put all “religions other than Christianity” into a single category (no. 290). Such a biased classification scheme, for all its ongoing usefulness in libraries, cannot work in the irreducibly multi-cultural world of global communication. In fact, no uniform classification scheme can work, given the impossibility of agreeing on a single cultural framework through which to define the categories. This, in addition to the scaling issues, is the reason why internet directories, as pioneered by Yahoo! and the Open Directory Project (dmoz), broke down after a short-lived period of success.

Search engines side-step this problem by flexibly reorganizing the index in relation to each query and using the self-referential method of link analysis to construct the ranking of the query list (see Katja Mayer in this volume). This ranking is said to be objective, reflecting the actual topology of the network that emerges unplanned through collective action. Knowing this topology, search engines favor link-rich nodes over link-poor outliers. This objectivity is one of the core elements of search engines, since it both scales well and increases the users’ trust in the system.
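The self-referential link analysis described above can be illustrated with a minimal PageRank-style power iteration. This is only a sketch under simplified assumptions (uniform damping, a toy graph); the ranking methods actually deployed by search engines are proprietary and far more complex:

```python
# Minimal PageRank-style power iteration: link-rich nodes end up
# with higher scores than link-poor outliers.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling node: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new[target] += share
        rank = new
    return rank

# A toy web: 'hub' is linked to by everyone, 'outlier' by no one.
web = {
    'hub':     ['a', 'b'],
    'a':       ['hub'],
    'b':       ['hub'],
    'outlier': ['hub'],
}
scores = pagerank(web)
print(max(scores, key=scores.get))  # prints 'hub': the link-rich node ranks highest
```

The point of the toy graph is exactly the bias the text names: the well-linked node accumulates rank, while the outlier, which no one links to, stays near the baseline regardless of its content.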

Deep Search: Introduction


It’s hard to avoid search engines these days. They have established themselves as essential tools for navigating the dynamic, expanding informational landscape that is the Internet. The Internet’s distributed structure doesn’t allow for any centralized index or catalog; consequently, search engines fulfill a central task of making digital information accessible and thus usable. It’s hard to imagine our lives without their ever-expanding digital dimension; and so it’s increasingly hard to imagine life without search engines. This book looks at the long history of the struggle to impose order on the always-fragile information universe, addresses key social and political issues raised by today’s search engines, and envisions approaches that would break with the current paradigms they impose.

Deep Search: The Politics of Search Beyond Google

I'm extremely proud that our book on the politics of search engines is out now. I know I'm totally biased, but I really think this is a great collection of essays. And it's available in English as well as in German. Two separate books, each containing original and translated texts.

Konrad Becker / Felix Stalder (eds.), Deep Search: The Politics of Search Beyond Google. Studienverlag, distributed by Transaction Publishers, New Jersey, 2009. 214 pp. ISBN 978-3706547956.

“This collection gets to the heart of the most important issues concerning our global information ecosystem: Will the ‘soft power’ of one, two, or three corporations exert inordinate yet undetectable influence over what we consider important, beautiful, or true? What are the possibilities for resistance? What are the proper avenues for law, policy, and personal choices? This book walks us through these challenges like no other before it.”

Siva Vaidhyanathan, University of Virginia, author of The Googlization of Everything

YouTube's failure to generate substantial income

The Register has a story on an analyst's estimate (whatever that's worth these days) that YouTube will lose close to half a billion dollars this year. They take this as an indication that the ad model is not working. After discussing two reasons why this might be the case -- either Google doesn't know how to do it (unlikely) or the model is fundamentally broken (more likely) -- they come up with an option for Google to make money out of YouTube.

Of course, there's a third option for YouTube. Its parent company - whoever that may be - may want to cross-subsidize the operation in the hope that will drive traffic elsewhere on the site. Don't laugh - that's exactly what Google's new music service in China does. Google China pays rightsholders much more than 0.22p per song - about ten times as much, according to industry estimates. As Baidu has shown, music drives enormous traffic to the rest of the operation.

See also Ars Technica's article on the same subject.

Update (14.04.): On the other hand, artists are demanding that YouTube increase its payments to them.

Update II (15.04.): A detailed breakdown of revenue and costs. The most interesting figure is the amount given to independent creators through its revenue-sharing program.

Revenue share: If you provide videos to Google and join its revenue sharing program, then you get a commission if ads are shown alongside your content. Credit Suisse estimates that YouTube will "share" away $24 million this year -- $66,000 per day.

IRIE – International Review of Information Ethics: Search Engines

Nothing new, in fact, already more than 3 years old, but still worth noting. All essays are available online.

The third edition of the ‘IRIE – International Review of Information Ethics’ (06/2005) and the first under its new title after having been renamed from IJIE (due to a name similarity with another infoethics journal) is dedicated to the focal subject “Search Engines”.

In his essay “Funktionen, Probleme und Regulierung von Suchmaschinen im Internet” (“Function, Problems, and Regulation of Search Engines in the Internet” -- an extended abstract in English is enclosed), Christoph Neuberger reports on this debate in Germany as well as on the most recent results of the communication sciences. Furthermore, we publish an English translation of the “Code of Conduct”, which was also developed in the context of the already mentioned research project. Important aspects such as “Ethical and Political Issues in Search Engines” (Hinman), the necessity of “Symmetry in Confidence” in search engines (Rieder), search engines and their relation to the “Ethical subject” (Blanke), and finally the “Problem of Privacy in Public” (Tavani) are treated in these four English contributions.

The issue is supplemented by two articles that do not fall under the focus of ‘search engines’ but complement it in one or the other way. Thomas Hoeren argues in ‘Laws, Ethics and Electronic Commerce’ that the Internet is leading to a dematerialization, deterritorialization, extemporalisation and depersonalisation of law and thereby the legal system loses its traditional (Roman law) roots (person, space, time). Secondly, the ‘Attitudes of UK Librarians and Librarianship Students to Ethical Issues’ have been empirically examined by Kevin Ball and Charles Oppenheim.

Google search logs as real time monitoring tool

Google claims that it can detect the outbreak of the common flu two weeks earlier than the US Centers for Disease Control, based on sudden spikes in relevant search terms -- e.g. flu symptoms, muscle ache -- that people are using. They set up a site to track this called "Flu Trends." They write

We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. Of course, not every person who searches for "flu" is actually sick, but a pattern emerges when all the flu-related search queries from each state and region are added together. We compared our query counts with data from a surveillance system managed by the U.S. Centers for Disease Control and Prevention (CDC) and discovered that some search queries tend to be popular exactly when flu season is happening. By counting how often we see these search queries, we can estimate how much flu is circulating in various regions of the United States.
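The mechanism Google describes -- aggregate flu-related query counts per region, then compare them against official surveillance data -- can be sketched as a simple correlation check. All numbers below are invented for illustration; the actual Flu Trends models and data are not public:

```python
# Sketch of the Flu Trends idea: correlate aggregated search-query
# counts with official surveillance figures for the same weeks.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical weekly counts of flu-related queries in one region ...
queries = [120, 150, 310, 560, 540, 300, 180]
# ... and a hypothetical official influenza-like-illness rate for the
# same weeks, peaking slightly later than the query spike.
cdc_ili = [0.8, 0.9, 1.1, 2.4, 4.0, 3.9, 2.2]

print(round(pearson(queries, cdc_ili), 2))
```

A strong positive correlation on historical data is what would justify using the query counts as an early, real-time proxy for the slower official statistics -- which is precisely the "two weeks earlier" claim.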

It's pretty clear that while we are learning about the world through Google, Google is learning about us. And it does so in real time. While we have to wait for someone to put material out there, Google has access to enormous amounts of raw data as it is being produced and can process it any way it wants -- most profitably, one can imagine, for marketing. I guess the pharmaceutical industry is quite interested in such data. One can imagine that publishing (or withholding) such real-time monitoring data could have a real effect on the development of the underlying phenomenon itself.
