
The common good

August 20, 2004 | By Philip Ball | This article courtesy of Nature News.

Projects such as SETI@home have proved that big research does not need big resources. Can lab-based research follow suit? Strike the right chord and you'll get an army of volunteers helping you for free, says Philip Ball.



Can you get thousands of people to work for you, generating a high quality product, without paying them? Conventional economics would answer: don't be silly.

But it's possible... and it's happening. Yochai Benkler, an expert on information and intellectual property at Yale Law School, Connecticut, thinks it might even point to a good new way to do science [1].

The trick is simple enough. The work must not, in fact, be for 'you' but for 'the public good'; there should be no top dog who rakes in profits, financial or otherwise. The corollary is that the fruits of all this labour must be freely available. If, furthermore, the goal is a worthy one, then people will flock to offer their time and effort for free.

Take Project Gutenberg, which is creating an ever-growing, searchable online library of copyright-free books. So far, 12,000 books have been painstakingly scanned (using optical character recognition) and checked by an army of volunteers. There is no glory in it for anyone: the volunteers are anonymous and the key task of proof-reading is mind-numbingly mundane. But still they do it (and I've had many occasions to be grateful for that). Any mistakes get cleared up by the same mechanism that debugs open-source software such as Linux.

Building community

It was this kind of public spirit that, according to legend, was instrumental in building the classic Gothic cathedrals. Contemporary accounts from the twelfth and thirteenth centuries tell of peasants and nobles alike shackling themselves to carts in order to carry stones to the construction sites, singing psalms in an ecstasy of religious fervour.

The truth is that most of the hard work was actually done by paid professionals, but nonetheless the cathedrals could not have arisen without the collaboration of the whole society, which believed helpers would literally get their rewards in heaven.

And that is fine for low-skill tasks like carting stones and checking spellings. But can this commons-based production, as Benkler calls it, really work for science? There is already good evidence that it can.

In 2000, NASA launched a project called Clickworkers. This called for volunteers to identify and classify craters in online images of the surface of Mars taken by the Mars Orbiter Camera. Users took a short online course that trained them in making classifications. NASA concluded after a year-long pilot study that "the automatically computed consensus of a large number of clickworkers is virtually indistinguishable from the inputs of a geologist with years of experience in identifying Mars craters".

How did the volunteers gain such expertise? Well, they didn't: the accuracy comes from the sheer volume of contributions. The collective ability of crowds to find the right answer to a problem that would severely tax an individual is explored in James Surowiecki's recent book, The Wisdom of Crowds (Random House, 2004). The classic example is the fairground challenge to guess the number of jelly beans in a jar, or the weight of a cow: the group's average guess typically beats the vast majority of individual estimates, because individual errors tend to cancel out in the mean.
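The jelly-bean effect is easy to simulate. The sketch below (with invented numbers: a jar of 850 beans, 1,000 guessers whose unbiased guesses scatter widely around the truth) shows that the crowd's average lands far closer to the answer than nearly any individual does:

```python
import random

random.seed(42)

TRUE_COUNT = 850   # hypothetical number of beans in the jar
N_GUESSERS = 1000

# Each guesser is individually noisy: unbiased, but with a wide spread.
guesses = [random.gauss(TRUE_COUNT, 200) for _ in range(N_GUESSERS)]

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_COUNT)

# What fraction of individuals guessed more accurately than the crowd?
beat_crowd = sum(abs(g - TRUE_COUNT) < crowd_error for g in guesses) / N_GUESSERS

print(f"crowd error: {crowd_error:.1f} beans")
print(f"individuals more accurate than the crowd: {beat_crowd:.1%}")
```

Because the individual errors are independent, they largely cancel when averaged: with 1,000 guessers the crowd's error shrinks by a factor of roughly the square root of the crowd size, and only a few per cent of individuals do better.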

The pharmaceutical company Eli Lilly is exploring the use of this kind of group wisdom, modelled as an internal stock market, to harness the skills of its employees in predicting which drug candidates are likely to gain approval by the US Food and Drug Administration. And turning to another large organization, Surowiecki suggests that the Columbia space shuttle disaster resulted from NASA's "failure to tap into knowledge and information that the people in the organization actually had".

Public gaze

Could mass contributions provide the long-sought alternative to scientific peer review? The idea of allowing anyone, rather than just 'experts', to comment on a paper submitted for publication might sound horrifying. But everyone knows how idiosyncratic the present system can be. In contrast, blog-style websites such as Slashdot and Kuro5hin have a much more democratic approach to reviewing the quality of submitted articles.

The details vary but, in essence, contributions to both sites are assessed by many other users through a rating system that proves very effective at sifting out nonsense. "Noise is not tolerated," Kuro5hin proclaims proudly. The bookseller Amazon operates a similar system in providing readers' reviews of books, and it is hard for unscrupulous authors to manipulate the system with self-reviews (although some admit to having tried).

The physics preprint server arXiv goes even further in having no peer-review system for papers posted on the site. This does mean that it sometimes hosts marginal contributions, but the facility for post-publication criticism, along with the importance of maintaining a reputation among peers, ensures that arXiv sustains a remarkably high quality.

Benkler thinks that this kind of 'peer production' can do more than provide quality control for scientific publications. He thinks that actual research can be conducted in this way too. Commons-based and peer production are especially well suited to tasks that use distributed information technology. The SETI@home project, for example, effectively created a supercomputer for the Search for Extraterrestrial Intelligence by harnessing the desktop machines of millions of volunteers to process radioastronomy data during 'idle' time.

What about laboratory 'downtime'? There are surely many labs with the apparatus and expertise to conduct relatively simple experiments in a quiet hour or two. "Anyone who can find days to work on applications for grants they don't expect to get can find a few minutes or hours to contribute to other goals they value," Benkler says. Particularly active contributors might even become co-authors on papers resulting from the research, he suggests.

The challenge would be for the scientists instigating such projects to find ways of chopping them up into fine-grained modules that volunteers could undertake with relatively little effort. Not only would this help projects that might otherwise be hard to finance, but it could undermine the encroaching culture of making scientific results proprietary intellectual property.
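The chopping-up that Benkler describes can be sketched in miniature. In the toy example below (the 'signal' data and the analysis function are stand-ins, not any real project's code), a dataset is split into small, independent work units, each cheap enough for one volunteer to process, and the central project simply merges the partial results:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in dataset: in a real project this might be slices of telescope data.
data = list(range(1_000))
CHUNK_SIZE = 100

def work_unit(chunk):
    # Stand-in for the real per-chunk analysis a volunteer would run.
    return sum(x * x for x in chunk)

# Chop the job into fine-grained, independent modules.
chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

# Each 'volunteer' (here, a thread) processes one unit; results merge centrally.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(work_unit, chunks))

total = sum(partials)
assert total == sum(x * x for x in data)  # merged result matches a single-machine run
```

The key property is that the chunks share no state, so they can be handed out in any order, to any number of contributors, and redone if a volunteer disappears, which is exactly what makes the approach robust with unpaid, intermittent labour.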


  1. Benkler, Y. Science 305, 1110-1111 (2004).

