Amazon Mechanical Turk (AMT) is a web service offered by Amazon.com. It works as a labor market that allows requesters to post tasks, known as HITs (Human Intelligence Tasks), and workers to perform those tasks in exchange for monetary payment. Examples of tasks include categorizing images, transcribing audio recordings, and testing websites. AMT is also known as a crowdsourcing marketplace, outsourcing activities to communities of internet users. Every day hundreds of projects, each comprising one to many HITs, are completed by thousands of workers. Monetary payments are highly variable, ranging from less than 5 cents for HITs that last just a few minutes to more than a dollar for longer ones.
AMT provides a potentially revolutionary resource for conducting experiments in the social sciences, allowing the recruitment of large numbers of subjects in a short time.
AMT can be a powerful tool for rapid (and potentially cheap) pilot studies during the development of new experimental ideas. It is easy to obtain feedback from hundreds of subjects in a few days at a cost of a few dollars. But can it also be used for “real” experimentation, that is, experiments that can be regarded as reliable sources of information on human behavior, at a level comparable to ordinary laboratory experiments?
There are many issues at stake. These relate both to the “web-based” nature of the experiments (e.g., identity control) and to the features and norms of AMT as a labor market (e.g., the ecological validity of studies with such low payments).
However, the truly important test is whether experiments performed with AMT can reliably reproduce results obtained in classical laboratory experiments.