Rest easy — organic food probably does not make you into a jerk…

My student Eileen Moery and I have a new paper out today in Social Psychological and Personality Science. It’s a replication paper that I’m quite proud of [cite source=’doi’]10.1177/1948550616639649[/cite]. It represents some evolution in how I’m supervising replication projects.

The new paper replicates a study purporting to show that being exposed to images of organic food produces a strong decrease in prosocial behavior and a strong up-tick in being morally judgmental [cite source=’doi’]10.1177/1948550612447114[/cite]. This is a potentially fascinating phenomenon–something like ‘moral licensing’, the ironic effect of good behavior fostering subsequent bad behavior.

The original paper caught fire and the media covered these findings extensively. Rush Limbaugh even crowed about them as evidence of liberal hypocrisy. I noticed the media coverage, and this is how the original study made it onto my ‘possible replication’ list. Eileen found it there, read the paper, and developed a fantastic honors project to put the initial study to the test.

For her project, Eileen contacted the original author to obtain the original materials. She planned and executed a large pre-registered replication attempt. She included a positive control (the Retrospective Gambler’s task) so that if the main study ‘failed’ we would have a way to check whether the failure reflected a problem with her procedures rather than the absence of an effect. She also devised a nice memory manipulation check to be sure that participants were attending to the study materials. She conducted the study and found little to no impact of organic food exposure on moral reasoning and little to no impact on prosocial behavior. She did find the expected outcome on the positive control, though–so sorry, doubters, this was not an example of researcher incompetence.

One of the things I don’t like about the current replication craze is the obsessive emphasis on sample size (this paper is not helping: [cite source=’doi’]10.1177/0956797614567341[/cite]). Sure, it’s important to have good power to detect the effect of interest. But power is not the only reason a study can fail. And meta-analysis allows multiple low-power studies to be combined. So why be so darned focused on the informativeness of a single study? The key, it seems to me, is not to put all your eggs in one basket but rather to conduct a series of replications–trying different conditions, participant pools, etc. The pattern of effects across multiple smaller studies is, to my mind, far more informative than the effect found in a single but much larger study. I’m talking about you, verbal overshadowing [cite source=’doi’]10.1177/1745691614545653[/cite].
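To make that concrete, here is a minimal Python sketch of fixed-effect, inverse-variance pooling, the basic machinery a meta-analysis uses to combine several noisy studies into one more precise estimate. The effect sizes and standard errors below are made up for illustration; they are not from our paper.

```python
import math

def pool_fixed_effect(ds, ses):
    """Fixed-effect (inverse-variance) pooling of standardized mean
    differences. Returns the pooled d and its 95% confidence interval."""
    weights = [1 / se ** 2 for se in ses]  # more precise studies get more weight
    d_pooled = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return d_pooled, (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)

# Three hypothetical small studies: each is noisy on its own,
# but the pooled estimate is considerably more precise.
ds = [0.10, -0.05, 0.12]   # illustrative per-study effect sizes
ses = [0.20, 0.18, 0.25]   # illustrative per-study standard errors
print(pool_fixed_effect(ds, ses))
```

The point of the weighting is simply that a study’s precision (1/SE²) determines how much it should pull on the combined estimate, which is why several modest studies can add up to a tight interval.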

Anyway, based on this philosophy, Eileen didn’t stop with one study. She conducted another, larger study using Mechanical Turk. There are lots of legitimate concerns about MTurk, so we used the quality controls developed in Meg Cusack’s project [cite source=’doi’]10.1371/journal.pone.0140806[/cite]–screening out participants who don’t speak English natively, who take far too long or too short a time to complete the study, etc. Despite all this care (and another successful positive control), Eileen still found that organic food exposure produced essentially zero change in moral judgments and prosocial behavior.
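For the curious, that kind of screening boils down to logic like the sketch below. The file name, column names, and cutoffs here are hypothetical placeholders for illustration, not the exact criteria from Meg’s project or our paper.

```python
import pandas as pd

# Hypothetical file and column names; the actual screening criteria
# follow Cusack et al. and are not reproduced exactly here.
df = pd.read_csv("mturk_responses.csv")

median_rt = df["duration_sec"].median()
keep = (
    df["native_english"]                     # self-reported native English speakers
    & (df["duration_sec"] > median_rt / 3)   # drop improbably fast completions
    & (df["duration_sec"] < median_rt * 3)   # drop extreme dawdlers
    & df["passed_memory_check"]              # attended to the study materials
)
clean = df[keep]
print(f"kept {len(clean)} of {len(df)} participants")
```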

Still not finished, Eileen obtained permission to conduct her study at an organic food market in Oak Park. She and I spent two very hot Saturday mornings measuring moral judgments in those arriving at or leaving the market. We reasoned that those leaving had just bought organic food and should feel much more smug than those merely arriving or passing by. Yes, there are some problems with making this assumption–but again, it was the overall pattern across multiple studies we cared about. And the pattern was once again consistent but disappointing–only a very small difference in the expected direction.

Although Eileen and I were ready to call it quits at this point, our reviewers did not agree. They asked for one additional study with a regular participant pool. Eileen had already graduated, but I rolled up my sleeves and got it done. The fourth time, though, was not the charm–again there was little to no effect of organic food exposure.

With all that said and done, Eileen and I conducted a final meta-analysis integrating our results. The journal would not actually allow us to report on the field study (too different!?), but across the other three studies we found that organic food exposure has little to no effect on moral judgments (d = 0.06, 95% CI [−0.14, 0.26], N = 377) and prosocial behavior (d = 0.03, 95% CI [−0.17, 0.23], N = 377).
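For readers who want to see where intervals like those come from, here is a back-of-envelope Python sketch using the standard large-sample approximation to the standard error of d. I’m assuming the total N of 377 splits roughly evenly across the two conditions; the exact cell sizes are in the paper.

```python
import math

def ci_for_d(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d, using the usual
    large-sample standard error for a two-group design."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Assuming N = 377 splits roughly evenly across conditions:
print(ci_for_d(0.06, 188, 189))   # moral judgments -> about (-0.14, 0.26)
print(ci_for_d(0.03, 188, 189))   # prosocial behavior -> about (-0.17, 0.23)
```

Intervals this wide are exactly why no single study here settles the question on its own; it is the pooled estimate hovering near zero that does the work.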

So–what’s our major contribution to science? Well, I suppose we have now dispelled what in retrospect is a somewhat silly notion: that organic food exposure could have a substantial impact on moral behavior. We are also contributing to the ongoing meta-science examining the reliability of our published research literature–it gives me no joy to say that this work is painting a relatively bleak picture. Finally, I hope that we have now gained enough experience with replication work to be (modestly) showing the way a bit. I hope the practices that are now becoming routine for my honors students (pre-registration, multiple studies, positive controls, careful quality controls, and synthesis through meta-analysis) will become routine in the rest of replication land. No, strike that–these are practices that should really be routine in all of psychology. Holding my breath.

Oh – and one other important thing about this paper–it was published in the same journal that published the original study. I think that’s exactly as it should be (journals should have to eat their own dog food). Obviously, though, this is exceptionally rare. I think it was quite daring for the journal to publish this replication, and I hope the good behavior of its editors is a model for others and a sign that things really are changing for the better.
