More and more social scientists have adopted MTurk as a venue for their research, praising its speed, low cost, and diversity relative to undergraduate samples. However, many may overlook other critical ways in which MTurk samples differ from undergraduate subject pool samples.
In a paper just published in Behavior Research Methods, we find worker non-naïveté to be a serious concern. One general issue is that MTurk workers share information about HITs with each other publicly and searchably on various forums, including on two different subreddits (see here and here for collected examples of manipulation checks and common measures that have become common knowledge among workers via these forums).
More specifically, while the probability that any worker has seen some manipulation may be low, there is a population of “superturkers”, i.e., extremely prolific workers, who are significantly more likely to end up in your studies. We pooled 16,408 HITs in 132 unique studies, and found that the HITs were completed by 7,498 unique workers. While the average worker completed 2.2 HITs, the top 1% of most prolific workers (15+ HITs) completed 11% of the total, and the top 10% (5+ HITs) completed nearly half (41%) of the total HITs.
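The concentration figures above follow directly from per-worker completion counts. A minimal sketch of the calculation, using synthetic data rather than the paper's actual counts (the `top_share` helper is illustrative, not from the paper):

```python
from math import ceil

def top_share(counts, fraction):
    """Share of all HITs completed by the top `fraction` of workers,
    ranked by number of HITs completed."""
    counts = sorted(counts, reverse=True)
    k = max(1, ceil(fraction * len(counts)))  # size of the top group
    return sum(counts[:k]) / sum(counts)

# Synthetic example: 10 workers with heavily skewed completion counts
counts = [30, 10, 5, 3, 2, 2, 1, 1, 1, 1]
print(top_share(counts, 0.10))  # top 10% of workers -> 30/56 ≈ 0.54
```

Applied to the real distribution of 16,408 HITs over 7,498 workers, this kind of calculation yields the 11% (top 1%) and 41% (top 10%) shares reported above.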
Superturkers can be problematic for several reasons:
- They are more likely to have seen standard manipulations (see figure below)
- They are significantly more likely to read MTurk blogs/forums
- They are significantly more likely to receive notifications from www.turkalert.com each time you (as an academic requester) post new HITs
However, on the plus side, we find that:
- They are less likely to be multitasking while on MTurk
- They are much more likely to respond to and complete follow-up studies (one year later, 75% of these workers completed a follow-up), which matters if you are interested in longitudinal research.
In sum, superturkers are a mixed blessing. MTurk can support more sophisticated designs, including longitudinal studies, and superturkers are reliable enough to make these viable. Since they are less likely to be multitasking, they may also be good participants in studies that require sustained attention (e.g., reaction-time tasks).
However, their non-naïveté, and worker non-naïveté in general, is a serious concern. Researchers can and should take steps to exclude workers who have already participated in earlier studies within the same line of research (the paper provides one solution; this method is another good solution if your studies are on Qualtrics). Ideally, researchers would also coordinate with others who do similar work, so that each could exclude the other's previous participants. Beyond that, researchers should be wary of using "classic" manipulations or measures on MTurk at all. The next blog post will detail the data that supports this admonition.
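Whatever tool handles the exclusion (the paper's solution, Qualtrics screening, or MTurk Qualifications), the core step is the same: check each incoming worker ID against a record of prior participants. A minimal sketch of that check (the function name and data shapes are illustrative, not from the paper):

```python
def screen_workers(submitted_ids, prior_participants):
    """Split incoming MTurk worker IDs into naïve workers and
    repeat participants seen in earlier studies in this line of research."""
    prior = set(prior_participants)  # set lookup keeps screening O(1) per ID
    naive = [w for w in submitted_ids if w not in prior]
    repeat = [w for w in submitted_ids if w in prior]
    return naive, repeat

# Usage: keep only workers who have not seen earlier studies
naive, repeat = screen_workers(["A1X", "A2Y", "A3Z"], prior_participants=["A2Y"])
print(naive)   # ['A1X', 'A3Z']
print(repeat)  # ['A2Y']
```

In practice one would run this against the worker IDs in a HIT's results file before analysis, or enforce it up front by assigning repeat participants an MTurk Qualification that bars them from new HITs.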
Chandler, J., Mueller, P., & Paolacci, G. (2014). Nonnaïveté Among Amazon Mechanical Turk Workers: Consequences and Solutions for Behavioral Researchers. Behavior Research Methods, 46(1), 112-130.