Kicked off in June 1998, with 5,000 people working from home to create and edit a Web-based content directory, Open Directory was one of the first crowdsourcing initiatives. Supplying its content freely to the public, Open Directory eventually became the indexing backbone for many of today's popular search engines.
"If a problem is that hard to solve," Tabb says, "then no one else is solving it either. If your competitors have figured it out, then you can copy them. But if no one has figured it out, then [the problem is] worth revealing."
Netflix, for example, wanted to improve its ability to predict whether a customer would like a particular movie, based on that customer's past preferences and the selections of similar individuals. To improve its chances, the company launched the Netflix Prize Web site, offering US$1 million for the best solution. The project, which has yet to award a grand prize, opened the industry leader's customer data sets to the public -- a significant competitive risk that Netflix believes will reap substantial rewards.
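Netflix has not published a single winning method here, but as a rough illustration of the kind of "similar individuals" prediction the Prize asked entrants to improve on, here is a minimal user-based collaborative-filtering sketch in Python; the users, movies, and ratings are invented for the example:

```python
# Minimal user-based collaborative filtering: predict a user's rating
# for a movie from the ratings of users with similar taste.
# The ratings matrix below is illustrative, not Netflix data.
import math

ratings = {
    "alice": {"Heat": 5, "Clueless": 1, "Ronin": 4},
    "bob":   {"Heat": 4, "Clueless": 2, "Ronin": 5},
    "carol": {"Heat": 1, "Clueless": 5},
}

def cosine_sim(a, b):
    """Cosine similarity over the movies two users have both rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[m] * b[m] for m in common)
    na = math.sqrt(sum(a[m] ** 2 for m in common))
    nb = math.sqrt(sum(b[m] ** 2 for m in common))
    return dot / (na * nb)

def predict(user, movie):
    """Similarity-weighted average of other users' ratings for a movie."""
    num = den = 0.0
    for other, their in ratings.items():
        if other == user or movie not in their:
            continue
        w = cosine_sim(ratings[user], their)
        num += w * their[movie]
        den += w
    return num / den if den else None

print(round(predict("carol", "Ronin"), 2))  # weighted by alice and bob
```

The Prize-winning approaches were far more elaborate blends of models, but they attacked this same matrix-of-ratings problem.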
"It's about taking a Net-based economy, letting people operate freely within that economy, and getting information from that," Tabb explains, speaking of crowdsourcing's value proposition. "The underlying premise is that the market of the whole will predict better than any of the individuals in the market."
Proof of the crowdsourcing concept
For many organizations, crowdsourcing has become synonymous with predictive markets, which create and tap a community of people to help predict the outcomes of certain scenarios, such as a presidential election. Such markets have the potential to provide invaluable data that goes well beyond what can be gleaned from focus groups, especially when linked with granular demographic information about the participants.
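To see how such a market turns individual bets into a collective forecast, here is a toy Python sketch of Hanson's logarithmic market scoring rule, a common market-maker design for prediction markets; the trades and the liquidity setting are illustrative assumptions, not a description of any particular vendor's system:

```python
# Toy yes/no prediction market using the logarithmic market scoring
# rule (LMSR). The instantaneous price of an outcome can be read as
# the crowd's current probability estimate for that outcome.
import math

class LMSRMarket:
    def __init__(self, b=100.0):
        self.b = b              # liquidity parameter (assumed value)
        self.q = [0.0, 0.0]     # shares outstanding: [yes, no]

    def cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def buy(self, outcome, shares):
        """Charge a trader the cost difference for buying shares."""
        new_q = list(self.q)
        new_q[outcome] += shares
        paid = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return paid

    def price(self, outcome):
        """Current price of an outcome, i.e., the implied probability."""
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

market = LMSRMarket()
market.buy(0, 30)   # traders expecting "yes" buy yes-shares...
market.buy(1, 10)   # ...others bet "no"
print(round(market.price(0), 2))  # aggregate probability of "yes"
```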
Predictive markets are but one form of crowdsourcing. Another popular mode is "human computing," in which companies create online games for people to play; the outcome of the game is information. Google, for example, sought to index billions of photos. Rather than have folks on staff devote years to tagging and categorizing images, the company launched Google Image Labeler in September 2006.
Google's labeling game is based on the ESP Game created by Luis von Ahn at Carnegie Mellon University. Two randomly selected players are paired online, shown the same image, and asked to label it. The moment both type the same term, Google's system connects that term with that image. The reasoning is that if two people independently describe an image with the same term, it is likely that others will as well. The gaming element comes into play because each match earns the players points and a ranking; top pairs and all-time top contributors are then recognized on the Google Image Labeler home page.
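The matching step is simple enough to sketch in a few lines of Python. This is a simplification of the real game; the point value and the taboo-word handling are assumptions for illustration:

```python
# ESP-Game-style matching: two paired players submit labels for the
# same image, and the first label they agree on becomes the image's tag.
import itertools

def play_round(labels_a, labels_b, taboo=frozenset()):
    """Return (matched_label, points) once the players agree,
    or (None, 0) if this image produces no match."""
    seen_a, seen_b = set(), set()
    # Interleave the two players' guesses in the order they arrive.
    for a, b in itertools.zip_longest(labels_a, labels_b):
        if a and a not in taboo:
            seen_a.add(a)
            if a in seen_b:
                return a, 100   # point value is an assumed placeholder
        if b and b not in taboo:
            seen_b.add(b)
            if b in seen_a:
                return b, 100
    return None, 0

# The agreed-on term becomes the image's searchable tag.
tags = {}
label, points = play_round(["dog", "puppy"], ["animal", "puppy"])
if label:
    tags.setdefault("img42", []).append(label)
print(tags)  # {'img42': ['puppy']}
```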
Other initiatives forgo play for pay
Labeling its foray into crowdsourcing "artificial artificial intelligence," Amazon.com has established the Mechanical Turk project, essentially "piecework" for a knowledge-based economy.
With Mechanical Turk, companies create Human Intelligence Tasks (HITs), which everyday people accept in exchange for small sums of money. HITs are tasks that humans can accomplish quickly and easily but that would take many hours of programming for a computer to carry out -- such as examining a scanned receipt and pulling out specific pieces of data. Mechanical Turk helps companies answer business-essential rank-and-file questions, and the folks answering those questions get paid.
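For the curious, posting a HIT is a short API call. Below is a minimal sketch using the Mechanical Turk Requester API through the boto3 Python SDK; the task URL, reward, and worker counts are placeholder assumptions, and the sandbox endpoint lets a company test without paying real workers:

```python
# Post a receipt-transcription HIT to the Mechanical Turk sandbox.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint: test HITs without spending real money.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points workers at a form the requester hosts;
# the URL below is a hypothetical placeholder.
question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/receipt-task</ExternalURL>
  <FrameHeight>400</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Extract the total from a scanned receipt",
    Description="Read one receipt image and type the total amount.",
    Keywords="data entry, receipts",
    Reward="0.05",                    # dollars per completed assignment
    MaxAssignments=3,                 # ask 3 workers, then compare answers
    LifetimeInSeconds=86400,          # HIT stays listed for one day
    AssignmentDurationInSeconds=600,  # each worker gets 10 minutes
    Question=question,
)
print(hit["HIT"]["HITId"])
```

Asking several workers the same question and comparing their answers is a common quality-control tactic on the platform.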
Michael Dell raised eyebrows with the release of Dell IdeaStorm. One of the questions Dell put to this crowdsourcing market was, "What product do you really want us to create?" The overwhelming response was a laptop that shipped with either no operating system or the Linux operating system. Dell listened to its market, with significant results: Reports state that Dell has sold more than 40,000 laptops installed with the Linux-based Ubuntu OS.