Posted by: Gabriele Paolacci | January 20, 2012

AMT symposium at SPSP 2012

We have organized a Mechanical Turk symposium at the Annual Meeting of the Society for Personality and Social Psychology (January 26–28, 2012, San Diego, CA). The symposium will provide information for researchers at any level of MTurk experience, beginning with an introduction to AMT and moving through research and tutorials on improving data quality, collecting data across time, incentivizing workers, and conducting true group dynamics experiments. It will take place on Saturday, January 28, 9:45–11:00 am, in Room 26. Below you can find more details about the talks.

We plan to share some symposium-related information and material on the blog, so stay tuned!

Research Using Mechanical Turk: Getting The Most Out of Crowdsourcing
Chair: Jesse Chandler
Co-Chair: Pam Mueller

Mechanical Turk: An Introduction and Initial Evaluation
Michael Buhrmester, Tracy Kwang, Sam Gosling
Mechanical Turk is a unique online marketplace that contains the major elements required to conduct research: a simple participant payment system; access to a large participant pool; and a streamlined interface for study design, participant recruitment, and data collection. After introducing these fundamental mechanics, we will evaluate findings that bear on MTurk’s potential validity and suitability for research purposes. Our findings indicate that (a) MTurk participants are more demographically diverse than typical college samples; (b) under the right parameters, participants can be recruited rapidly and inexpensively without affecting data quality; and (c) MTurk data can be just as reliable as data obtained via traditional methods. Finally, to help ease concerns of novice users, we will provide a quick beginner’s walkthrough of the major elements required to get a study off the ground.

Are Your Participants Gaming The System? Improving Data Quality on Mechanical Turk
Julie S. Downs, Mandy B. Holbrook
Amazon’s Mechanical Turk provides an efficient means of recruiting large samples quickly, but this ease may come at the cost of reduced control over data quality. Previous research has shown that simple measures (e.g. time stamps) are insufficient to differentiate conscientious workers from people looking for free money. Popular strategies for quality control, including instructional manipulation checks, identify only the most egregious attentional lapses. Additionally, they violate Gricean conversational norms, breach the scientific trust relationship, and bias the study sample. Unlike most MTurk tasks, psychological surveys cannot be assessed directly for worker performance without violating scientific impartiality and ethical edicts against punishing participants for their responses. In this talk, we will present an empirical assessment of other strategies for restricting data collection and data retention to those truly participating in the study, and will discuss the implications for the generalizability of MTurk data in general and when screening procedures are used.
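As a purely illustrative example of the kind of screening the talk examines, the sketch below filters a hypothetical response file on completion time and an attention-check item. The column names, file name, and cutoff are invented for the example and are not the speakers' actual criteria.

    # Illustrative screening pass (Python/pandas); column names
    # ("duration_sec", "imc_passed") and the 5th-percentile speed
    # cutoff are hypothetical, not the speakers' criteria.
    import pandas as pd

    df = pd.read_csv("survey_responses.csv")

    # Flag implausibly fast completions with a crude speed cutoff.
    min_plausible = df["duration_sec"].quantile(0.05)

    # Retain respondents who took a plausible amount of time and
    # also passed an instructional manipulation check item.
    screened = df[(df["duration_sec"] >= min_plausible) & (df["imc_passed"] == 1)]

    print(f"Retained {len(screened)} of {len(df)} responses")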

Advanced Uses of Mechanical Turk Crowdsourcing in Psychological Research
Pam A. Mueller, Jesse J. Chandler, Gabriele Paolacci
Mechanical Turk has many tools and capabilities that can be quite useful to behavioral researchers but are not immediately evident to new users. We discuss the features of MTurk that give it important advantages over other online data collection methods, and address the problem of duplicate participants across programmatic research. We also present an introduction to advanced uses of MTurk for researchers with minimal programming knowledge. These tools improve data quality by, for instance, allowing workers to be incentivized, preventing workers from completing related studies, and facilitating direct communication between requesters and workers. We also discuss how these tools enable more sophisticated data collection (e.g. prescreening, longitudinal studies). We demonstrate the effectiveness of these techniques through their implementation in our own work, and offer a potential solution to the issues that arise as MTurk workers become non-naïve participants through their involvement in numerous behavioral studies.
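For readers curious what "preventing workers from completing related studies" can look like in practice, here is a minimal sketch of the exclusion-by-qualification idea using the MTurk API via boto3 (a later SDK than anything covered in the talk; the qualification name, worker and assignment IDs, and file path are placeholders): a qualification tags past participants, and new HITs require that it not exist.

    # Sketch of excluding past participants via a qualification,
    # using the boto3 MTurk client. All names, IDs, and paths are
    # hypothetical placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    # One-time setup: a qualification that marks prior participants.
    qual = mturk.create_qualification_type(
        Name="Completed related study",  # hypothetical label
        Description="Workers who took part in an earlier study in this line",
        QualificationTypeStatus="Active",
    )
    qual_id = qual["QualificationType"]["QualificationTypeId"]

    # After each study, tag the workers who took part...
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId="A1EXAMPLEWORKERID",
        IntegerValue=1,
        SendNotification=False,
    )

    # ...and require on new HITs that the qualification NOT be present,
    # so prior participants cannot even preview the task.
    mturk.create_hit(
        Title="Decision-making survey (10 min)",
        Description="Answer a short questionnaire",
        Reward="0.50",
        MaxAssignments=100,
        LifetimeInSeconds=86400,
        AssignmentDurationInSeconds=1800,
        Question=open("survey_question.xml").read(),
        QualificationRequirements=[{
            "QualificationTypeId": qual_id,
            "Comparator": "DoesNotExist",
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        }],
    )

    # Incentives and direct communication use the same client, e.g.:
    mturk.send_bonus(
        WorkerId="A1EXAMPLEWORKERID",
        BonusAmount="1.00",
        AssignmentId="3EXAMPLEASSIGNMENTID",
        Reason="Bonus for completing the follow-up session",
    )

The same client also exposes notify_workers for messaging a panel directly, which covers the requester-to-worker communication mentioned above.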

Conducting Synchronous Experiments on Mechanical Turk
Siddharth Suri, Winter Mason
Crowdsourcing platforms, including Amazon’s Mechanical Turk, are a new and fruitful means of conducting online research at relatively low cost. However, many psychological studies require groups of participants to interact synchronously, and the mechanisms for accomplishing this on MTurk are not built in and are far from obvious. We will describe a technique we have developed for doing so, which has four key components: recruiting participants into a panel, notifying them of a start time, a “waiting room” that accumulates participants up to a threshold, and methods for handling attrition. We will discuss common pitfalls associated with running synchronous experiments online and on crowdsourcing platforms, and demonstrate the efficacy of our technique with research we have conducted.
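The waiting-room component lends itself to a toy illustration. The sketch below is ours, not the speakers' implementation: a barrier holds simulated participants until a group has accumulated, with a timeout standing in for attrition handling. A real deployment would be a web service; threads stand in for participants here.

    # Toy "waiting room": hold arriving participants until GROUP_SIZE
    # have accumulated, then release them together to start the task.
    import threading

    GROUP_SIZE = 4
    barrier = threading.Barrier(GROUP_SIZE, timeout=300)  # give up after 5 min

    def participant(worker_id: str) -> None:
        print(f"{worker_id} entered the waiting room")
        try:
            barrier.wait()  # blocks until GROUP_SIZE participants arrive
            print(f"{worker_id} starts the group task")
        except threading.BrokenBarrierError:
            # Attrition handling: the group never filled, so compensate
            # waiting workers (e.g., a show-up fee) and re-recruit.
            print(f"{worker_id} timed out while waiting")

    threads = [threading.Thread(target=participant, args=(f"worker-{i}",))
               for i in range(GROUP_SIZE)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()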

Responses

  1. […] January 28, SPSP 2012 hosted a symposium on AMT. Below are the links to download the slides used by the four speakers. In the next days we […]

