Posted by: Gabriele Paolacci | October 12, 2009


We submitted the “African countries problem” from Tversky and Kahneman (1974) to 152 workers (61.2% women, mean age = 35.4). Participants were paid $0.05 for a HIT that also comprised other unrelated brief tasks. Approximately half of the participants were asked the following question:

Do you think there are more or less than 65 African countries in the United Nations?

The other half was asked the following question:

Do you think there are more or less than 12 African countries in the United Nations?

Both groups were then asked to estimate the number of African countries in the United Nations.

As expected, participants exposed to the large anchor (65) provided higher estimates than participants exposed to the small anchor (12), F(1, 150) = 55.99, p < .001. We were therefore able to replicate a classic anchoring effect: our participants’ judgments were biased toward the implicitly suggested reference points. It should be noted that the means in our data (42.6 and 18.5, respectively) are very similar to those recently published by Stanovich and West (2008; 42.6 and 14.9, respectively).
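For readers curious about the statistic itself: the reported F(1, 150) comparison is an ordinary one-way ANOVA between the two anchor groups (with two groups, it is equivalent to the square of an independent-samples t-test). The sketch below computes the F value by hand; the simulated data are illustrative assumptions, not the original responses.

```python
# Hedged sketch of the between-groups analysis: a one-way ANOVA over
# two anchor conditions. Group sizes, means, and spread are assumed
# for illustration (76 per group, means near the 42.6 / 18.5 reported).
import random

random.seed(1)

# Hypothetical estimates for the high-anchor (65) and low-anchor (12) groups.
high = [random.gauss(42, 15) for _ in range(76)]
low = [random.gauss(18, 15) for _ in range(76)]

def anova_f(*groups):
    """Return the F statistic for a one-way ANOVA across the given groups."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    # Between-groups sum of squares (df = number of groups - 1)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-groups sum of squares (df = N - number of groups)
    ss_within = sum(
        (x - sum(g) / len(g)) ** 2 for g in groups for x in g
    )
    df_between = len(groups) - 1
    df_within = len(all_obs) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = anova_f(high, low)
print(round(f_stat, 2))  # a large F, consistent with a strong anchoring effect
```

With a mean difference this large relative to the within-group spread, the F statistic comes out far above the conventional significance cutoff, mirroring the pattern (though not the exact value) reported in the post.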


Stanovich, K. E., & West, R. F. (2008). On the relative independence of thinking biases and cognitive ability. Journal of Personality and Social Psychology, 94, 672-695.

Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.


  1. I think you did this wrong! The anchor you provided was a totally legitimate thing to condition one’s estimate on.

    What’s really interesting is that people anchor on numbers they *know* have nothing to do with what they’re estimating. I vaguely recall one study where the researcher had the subjects write down the last couple digits of their social security numbers and then had them say how much they’d be willing to pay for some item. Amazingly, people anchored on their own social security numbers!

    It’s unsurprising, and I would say rational, that people exposed to the higher anchor in your experiment made higher estimates.

  2. Also, sorry to only jump in when I had a quibble. What you’re doing is awesome and amazing. Keep up the great work!

    Also, my quibble isn’t even with you but with the experiment you’re replicating.

  3. Hey Daniel,

Thanks for your comment.

What we replicated was just one of the earliest and most classic anchoring tests. It’s not the goal of this blog to discuss the actual validity of the experiments we replicate, but let me point out that I still think it’s a nice example of this bias.

Anyway, I agree that there have been more striking demonstrations of anchoring over the years, with anchors that were not even related to the problem setting. The study you are talking about, by Ariely, Loewenstein, and Prelec, is a great one.

    Thanks for your encouraging words!

  4. […] results have been reconfirmed by the Experimental Turk. For instance, in the “anchoring” experiment (more about anchoring here), 152 workers were asked a question about how many […]

  5. […] to a task such as when verification questions interrupt the flow of a worker’s thinking, or anchor a worker’s subsequent […]
