felix's blog

Google search logs as a real-time monitoring tool

Google claims that it can detect the outbreak of the common flu two weeks earlier than the US Centers for Disease Control, based on sudden spikes in relevant search terms -- e.g. flu symptoms, muscle ache -- that people are using. They set up a site to track this called "Flu Trends." They write:

We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. Of course, not every person who searches for "flu" is actually sick, but a pattern emerges when all the flu-related search queries from each state and region are added together. We compared our query counts with data from a surveillance system managed by the U.S. Centers for Disease Control and Prevention (CDC) and discovered that some search queries tend to be popular exactly when flu season is happening. By counting how often we see these search queries, we can estimate how much flu is circulating in various regions of the United States.

It's pretty clear that while we are learning about the world through Google, Google is learning about us. And it does so in real time. While we have to wait for someone to put material out there, Google has access to enormous amounts of raw data as it is being produced and can process it any way it wants, most profitably, one can imagine, for marketing. I guess the pharmaceutical industry is quite interested in such data. One can imagine that publishing (or withholding) such real-time monitoring data has a real effect on the development of the underlying phenomenon itself.

les incoherents

La Mona Lisa fumant une pipe

I'm doing research on the early practices of remixing and came across this gem from the late 19th century, by Eugène Bataille, a member of an art group called "Les Incohérents" which, quite frankly, I had never heard of. Yet they did many of the things that the Dadaists and Surrealists would later do, a full generation earlier.

How search engines organize the world's information

Today an interview with Konrad Becker and myself on social and political aspects of search engines appeared on the ORF website. This is all part of the preparation leading up to our Deep Search conference on Nov. 8.

The conference "Deep Search", which takes place in Vienna on Saturday, examines the social power of search engines. ORF.at spoke with the conference organizers Konrad Becker and Felix Stalder of the World Information Institute about digital orders of knowledge, civil rights at Google, and paranoia as an instrument of insight.

Visualization of remix culture

Giorgos Cheliotis, assistant professor of Communications and New Media at the National University of Singapore, has done one of the first (if not the first) network analyses and network visualizations of a remix community, based on ccMixter.

He writes:

One of the visualizations, consisting of all uploaded audio tracks that have been remixed and all remixes thereof, is shown below. I was very surprised by the structure, density and connectedness of the resulting network. I was expecting to see a more weakly connected set of “islands of common interest”, as defined by genre, friendships or location. Instead, before we even go into deeper analysis, the figure suggests that the creative reuse of cultural content (such as enabled by licenses like Creative Commons) leads to a very high degree of cross-pollination across authors and across works, forming a dense network of greatly enhanced collaboration and creativity through open sharing and reuse. We have posted a working paper and more cool hi-res visuals on the Participatory Media Lab wiki.

This seems to suggest that cultural -- or at least musical -- styles are becoming ever more fluid as the range of sources becomes ever wider.
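The connectedness Cheliotis was surprised by can be checked with a standard weakly-connected-components count. A toy sketch; the edge list below is invented for illustration, not ccMixter data:

```python
# Count weakly connected components in a toy remix network.
# Edges point from a source track to a remix of it; the pairs
# below are invented, not real ccMixter uploads.
from collections import defaultdict

edges = [("a", "b"), ("a", "c"), ("c", "d"),
         ("e", "f"), ("d", "f"), ("g", "h")]

# treat the graph as undirected to find weakly connected components
adj = defaultdict(set)
for src, rmx in edges:
    adj[src].add(rmx)
    adj[rmx].add(src)

def components(adj):
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

comps = components(adj)
print(len(comps), [len(c) for c in comps])
```

"Islands of common interest" would show up as many small components; the dense cross-pollination Cheliotis found corresponds to most works ending up in one giant component.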

Google and Authors Guild agree on settlement

It seems like Google is quickly moving to settle major copyright issues, first with respect to YouTube, now in the case of Google Book Search. On October 28, 2008, the Authors Guild, the Association of American Publishers and Google announced the landmark settlement of Authors Guild v. Google. This is a major deal, both in providing a new economic infrastructure based on the abundance of information rather than its scarcity, and also, worryingly, in further cementing Google's central position as provider of the world's information. It's clearly another move toward centralizing basic services on the Internet.

As Google explains the settlement:

What does the settlement provide?

If approved, the Settlement will authorize Google to continue to scan in-copyright Books and Inserts; to develop an electronic Books database; to sell subscriptions to the Books database to schools, corporations and other institutions; to sell individual Books to consumers; and to place advertisements next to pages of Books. Google will pay Rightsholders, through a Book Rights Registry (“Registry”), 63% of all revenues earned from these uses, and the Registry will distribute those revenues to the Rightsholders of the Books and Inserts who register with the Registry. Distribution will be made pursuant to an Author-Publisher Agreement and a Plan of Allocation, which are part of the Settlement Agreement.

The proposed Settlement also will authorize Google to provide public and higher education libraries with free access to the Books database. Certain libraries that are providing Books to Google for scanning are authorized to make limited "non-display uses" of the Books. Those uses are described in the Notice and the Settlement Agreement.

Linux Foundation: the kernel is worth $1.4 billion

Ars Technica writes:

The Linux Foundation has published the results of a study that the organization conducted in order to compute the approximate financial value of the Linux platform. Based on the results of the study, the Linux Foundation has concluded that it would cost $1.4 billion to develop the Linux kernel from scratch and $10.8 billion to develop the complete platform stack.

Even though this is just a very rough estimate (based on the volume of code), it's still a handy figure. The full study (which includes a discussion of the limitations of its methodology) can be found here.
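Estimates like this typically derive a cost from source-line counts via a COCOMO-style model. A minimal sketch using the standard basic-COCOMO "organic mode" constants and an invented average developer cost; these are not the study's actual parameters, so the output will not reproduce its figure:

```python
# Sketch of a basic-COCOMO-style cost estimate from code volume.
# Constants 2.4 and 1.05 are the standard basic-COCOMO "organic mode"
# values; the line count and salary below are rough assumptions,
# not the Linux Foundation study's actual inputs.

def cocomo_cost(sloc, annual_cost_per_dev=75_000):
    kloc = sloc / 1000
    effort_pm = 2.4 * kloc ** 1.05        # estimated effort in person-months
    person_years = effort_pm / 12
    return person_years * annual_cost_per_dev  # total cost in dollars

# e.g. a kernel-sized code base of several million lines
print(f"${cocomo_cost(6_772_000):,.0f}")
```

The exponent above 1 is why such figures grow faster than linearly in code size: doubling the line count more than doubles the estimated cost.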

Clay Shirky responds to my review

A few weeks ago, I published a review of Clay Shirky's new book "Here Comes Everybody" on Metamute.

In the meantime, Simon Collister was able to ask Shirky about my review, in which I criticized him for talking only about non-controversial issues and omitting major questions such as copyright and business models / profiling / privacy.

In his response, Shirky focused only on the question of copyright, claiming, strangely, that while he had written a lot about it in the past, it was not a real issue in the big picture. Here is what he had to say:

local copy (12MB)

The Featured Artists' Coalition

The Featured Artists' Coalition campaigns for the protection of performers' and musicians' rights. We want all artists to have more control of their music and a much fairer share of the profits it generates in the digital age. We speak with one voice to help artists strike a new bargain with record companies, digital distributors and others, and are campaigning for specific changes.

This coalition includes some of the best-selling acts in the British music business and is yet another sign that the business model of the record label has become unacceptable, even for those for whom it works relatively well. As the labels' exclusive control over distribution has vanished, their ability to dictate terms has been reduced as well. Really, the only reason they still matter is their monopolistic control over our musical history, i.e. the back catalogue.

The Coalition will begin by focusing on six areas where it is seeking change:

1. An agreement by the music industry that artists should receive fair compensation whenever their business partners receive an economic return from the exploitation of the artists’ work.

2. All transfers of copyright should be by license rather than by assignment, and limited to 35 years.

3. The making available right should be monetized on behalf of featured artistes and all other performers.

4. Copyright owners to be obliged to follow a ‘use it or lose it’ approach to the copyrights they control.

5. The rights for performers should be the same as those for authors (songwriters, lyricists and composers).

6. A change to UK copyright law which will end the commercial exploitation of unlicensed music purporting to be used in conjunction with ‘critical reviews’.

Automate everything: NewsBots and TradeBots

The Wall Street Journal runs a story on how the Google News algorithm misread an old story (from 2002) about UAL's financial problems as breaking news and posted it, thus triggering other algorithms that sell stocks based on news stories.

Google traces the appearance of the 2002 article in its search engine to a process that began late last Saturday night. At 10:36 p.m. PDT, Google's "crawler" -- the technology that finds Web pages -- discovered a new link on the Web site of Tribune's South Florida Sun-Sentinel newspaper in a section called "Popular Stories: Business." The article -- which didn't carry a date but was published by the Chicago Tribune in December 2002 -- hadn't appeared there when Google's crawler last visited the page at 10:17 p.m., the company said.
From the Sun-Sentinel site, the article became available through Google News service, accessible if a user searched for keywords like "United Airlines." The article didn't appear in any of the headlines on Google News's home page, but it was picked up and sent via email to people who had created a custom Google News alert about UAL or related topics.

The stock market opened Monday with no drop in UAL shares, but the UAL story began circulating widely via a posting by research firm Income Securities Advisors Inc. that was made available to users of Bloomberg L.P., the financial-news service widely watched on Wall Street. Shortly after a headline from the outdated report flashed across Bloomberg screens at about 10:45 a.m., UAL shares began a precipitous drop. Over the next 15 minutes, before Nasdaq halted trading, they dropped as low as $3.

It's not the first time erroneous news reports have swung stock prices, but the increasing reliance on Google, Yahoo and other news aggregators ratchets up the speed with which information -- correct or incorrect -- can spread across the globe.
