I Was a Subject in Deena Weisberg’s Study

Two years ago I took a cognitive neuroscience course in which all the students were asked to participate in a psychological study. We filled out an online survey that asked us to rate various explanations for psychological phenomena. I remember hating the survey because all the explanations seemed bad, because it was time-consuming, and because I had no idea what the study was about. Needless to say, I treated it with a healthy dose of apathy.

Now the study, led by PhD student Deena Weisberg, is available online in advance of its publication in the Journal of Cognitive Neuroscience. Apparently it was meant to test how irrelevant neuroscience information can bias our judgments about whether an explanation is good or bad. Here is an example survey question:

[image: sample survey question from Weisberg et al.]

Weisberg asked novices, neuroscience students, and experts to take the survey. She found that extraneous neuroscience information made novices and students more likely to endorse bad explanations. Experts did not fall for the neuro-jargon. In fact, they were less likely to endorse the good explanations if they included extraneous information.

Weisberg concludes that we should be careful with our appeals to neuroscience information, especially when the audience is composed of non-experts. For instance, if scientific evidence is presented in a courtroom, jurors might allow it to sway their judgment even if it is irrelevant.

So how does it feel being held up to the scientific community as an exemplar idiot? Well, it’s a bit embarrassing. One of my coping mechanisms has been to criticize the experimental design. For instance, I think it’s problematic that the with-neuroscience explanations were longer than the without-neuroscience explanations. If subjects merely skimmed some of the questions (not that I would ever do such a thing), they might be more likely to endorse lengthier explanations.

Another problem I have is the circularity of the rating system. The ‘objective’ ratings for whether explanations were good or bad, and for whether the extra information was indeed irrelevant, were supplied by cognitive neuroscience experts. Is it really surprising, then, that this group showed the least bias in judging the explanations?

I could rattle off a few more complaints, but I’ll stop here. In fact, I think the study deserves a lot of the attention it is getting. With new fMRI studies coming out daily, it’s important not to get sucked in by all the pretty pictures and to retain some critical distance.
