The P-Value Ban, Revisited: An Interview with David Trafimow, PhD

Basic and Applied Social Psychology

Routledge Psychology features a portfolio of more than 160 titles, many of them society-affiliated. The Editors of our journals are leaders in their fields. Dr. David Trafimow is Editor of Basic and Applied Social Psychology, which publishes six times per year and holds a current Impact Factor of 3.426 (© 2018 Clarivate Analytics, 2017 Journal Citation Reports®), ranking 8th out of 64 journals in Social Psychology. Dr. Trafimow's leadership in peer review has influenced research submissions in psychology, statistics, biology, and economics.

Interview with David Trafimow, PhD, New Mexico State University
Interviewed by Hannah White of Taylor & Francis Group

Read the transcript of the interview below.

Hannah White: This is Hannah White from Taylor & Francis, and I'm here with Dr. David Trafimow, Professor at New Mexico State University and Editor-in-Chief of Basic and Applied Social Psychology. In 2015, Dr. Trafimow published an Editorial [along with Michael Marks—ed.] stating that the journal would no longer accept papers relying on certain statistical methods, including the null hypothesis significance testing procedure, or NHSTP.

Three years later, Dr. Trafimow joins me to look back on the p-value ban and to assess its impact on both the journal and the field at large. Thank you so much for being here, Dr. Trafimow. Let's begin with some background. What role has the NHSTP traditionally played in psychological research?

David Trafimow: Traditionally, null hypothesis significance testing has played an extremely important role in research. This is because it was used as a gateway for whether submitted papers could be published or not. If a finding [did not reach] statistical significance, then that was taken as indicating that the finding was not reliable or not replicable or not real, pick your word—and so there was almost no chance that the paper would be accepted. In contrast, if statistical significance was achieved, then there would be more to the process. If the reviewers deemed the paper to be interesting, not to have fatal flaws in the design and so on, then the paper could be accepted. So, significance testing has played an extremely important role in research and in the ability to publish research in the past.

Hannah White: So, given this important role in the past, why did you decide to institute the p-value ban in papers published in Basic and Applied Social Psychology, and what factors played into your decision?

David Trafimow: The main reason is that I felt null hypothesis significance testing caused a huge amount of harm to the field of psychology, including social psychology. You may recall from my previous answer that I talked about the perception that the p-value tells you whether a finding is real, replicable, reliable, and so on. None of those things is actually true. P-values don't tell you any of that. Now, people who are sophisticated in statistics understand that, but far too many researchers, including both authors and reviewers, do not. And so, some of the harms, in addition to or because of this wrong perception, are that it has caused binary thinking, where people assume a finding is either there or not there, and not enough attention to what the actual effect size is. And even in those cases where researchers have reported effect sizes, it has resulted in an overestimation of effect sizes.

The reason this has happened is that, to pass the bar, so to speak, you need an effect size strong enough to reach statistical significance. But there's a lot of randomness in whether that happens, and so it's just the papers that have gotten lucky that have been published, which has led to an overestimation of effect sizes. Empirical support for my claim came out of the recent replication project, where they found that the average effect size in the replication cohort of studies was about half that in the original cohort. Another problem, of course, that comes out of that same paper, is that there are a lot of replication problems. The field of social psychology is rife with effects that have not been replicated, or that people have tried and failed to replicate. There's huge harm, and frankly, going the other way, I don't really see where it's done much good.
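The selection effect Dr. Trafimow describes can be sketched in a short simulation (illustrative numbers, not from the interview; stdlib only). Many studies of the same small true effect are simulated, but only those whose p-value clears the 0.05 bar are "published," and the published studies systematically overestimate the effect:

```python
import math
import random
import statistics

def run_study(true_effect, n, rng):
    """Simulate one study: n observations from N(true_effect, 1), z-test with known sigma."""
    xs = [rng.gauss(true_effect, 1.0) for _ in range(n)]
    mean = statistics.fmean(xs)
    z = mean * math.sqrt(n)                     # standard error = 1/sqrt(n)
    p = 1.0 - math.erf(abs(z) / math.sqrt(2))   # two-sided p = 2 * (1 - Phi(|z|))
    return mean, p

def significance_filter(true_effect=0.2, n=25, studies=20000, seed=1):
    """Compare the average observed effect across all studies vs. significant ones only."""
    rng = random.Random(seed)
    results = [run_study(true_effect, n, rng) for _ in range(studies)]
    all_mean = statistics.fmean(m for m, _ in results)
    published_mean = statistics.fmean(m for m, p in results if p < 0.05)
    return all_mean, published_mean

all_mean, published_mean = significance_filter()
print(f"true effect 0.20 | all studies: {all_mean:.3f} | significant only: {published_mean:.3f}")
```

With these (assumed) parameters the unfiltered average stays near the true effect of 0.20, while the average among significant studies comes out much larger, mirroring the roughly halved effect sizes seen when filtered literatures are replicated.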

Hannah White: In your experience as editor over the past few years, has this ban—the p-value ban—caused any authors to change their conclusions?

David Trafimow: Yes, it has. Very often, researchers relied on null hypothesis significance testing in their original submissions. Then, when I actually have them look more closely at the data, they realize, or I realize for them, that the data, rather than confirming the hypothesis, actually go in the other direction. Let me give you a common example that I've encountered. A lot of researchers use a two-by-two design, where two variables are manipulated simultaneously. The hypothesis is that there should be an effect of one variable under one level of the other variable, but not under the other level. Seemingly consistent with that hypothesis, the researcher gets a statistically significant interaction.

But here comes the fun part: after looking at the means, it becomes clear that the effect happened under both levels of the other variable, with the effect being larger under one level than under the other level. You see, the effect happened regardless. It's just that it happened more in one case than the other case. Since the hypothesis was that the effect shouldn't happen in one case, the fact that it happened in both cases is actually inconsistent rather than consistent with the hypothesis. When I point this out to authors, they see it, eventually, and then they change their interpretation.
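The pattern described above can be made concrete with hypothetical cell means (the numbers below are invented for illustration). The interaction contrast is nonzero, yet the simple effect of variable A is also nonzero under both levels of variable B, which contradicts the "effect under B1 only" hypothesis:

```python
# Hypothetical cell means for a 2x2 design (variables A and B, two levels each).
cells = {
    ("A1", "B1"): 6.0, ("A2", "B1"): 4.0,   # effect of A under B1: 2.0
    ("A1", "B2"): 5.0, ("A2", "B2"): 4.5,   # effect of A under B2: 0.5 (nonzero!)
}

effect_under_B1 = cells[("A1", "B1")] - cells[("A2", "B1")]
effect_under_B2 = cells[("A1", "B2")] - cells[("A2", "B2")]
interaction = effect_under_B1 - effect_under_B2   # difference of simple effects

print(f"effect of A under B1: {effect_under_B1}")
print(f"effect of A under B2: {effect_under_B2}")
print(f"interaction contrast: {interaction}")
```

A significance test on the interaction alone would flag the 1.5 contrast, but only inspecting the cell means reveals that the effect is present in both conditions, just larger in one, which is exactly the point about looking at the data rather than at p-values.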

Hannah White: That's really interesting. So when papers are submitted and you review them, you notice this yourself and point it out to the authors?

David Trafimow: Yeah, I point that out. I'm somewhat fanatical about actually looking at the data, rather than just depending on p-values to draw conclusions. Of course, it often happens that when authors look at the data carefully, they don't change their conclusions, but it happens fairly often that they do change their conclusions. I'd really like to urge everybody to actually look carefully at your data.

Hannah White: Speaking about the journal as a whole, this definitely made a lot of news. How has the journal been affected since this p-value ban, and in particular, what has been the effect on the citation rate of the journal?

David Trafimow: I don't have exact numbers at my fingertips, but I would say that the citation rate has more than doubled since the ban. I consider that a very good reaction. The news isn't all good. At least at first, there was a decline in the rate at which authors submitted papers. However, the journal seems to have snapped back from that. As I was saying before, the citation rate has more than doubled, which I consider to be a good outcome.

Hannah White: Outside of the journal, in the larger academic research community, have you observed other effects taking place? Has it been spreading at all, this sort of new thinking?

David Trafimow: Yes, it has. I've been extremely happy about what has been going on. The American Statistical Association had a symposium on statistical inference a few months ago. I think it was in October of 2017. My perception of that symposium was that null hypothesis significance testing was roundly criticized, with the vast majority against it and very few people in favor of it. Of course, this is mostly statisticians, and that doesn't mean that in substantive areas, researchers will stop doing it. But I was encouraged that at least the statisticians seem to really understand a lot of what's wrong with null hypothesis significance testing.

In addition to that, a lot of people are writing textbooks that explain the problems with significance testing, or even do without it. I actually see some emails from people saying that they are doing that. Of course I find that very encouraging. It's also becoming a topic in a lot of different substantive areas, as researchers become increasingly concerned with that. I would say that the trend looks promising.

Hannah White: That's great. I think you mentioned in an essay you wrote for T&F that you'd observed this throughout various fields of study, not just social psychology.

David Trafimow: That's correct. In fact, at kind of a personal level, I've been invited to give presentations to people in a whole bunch of different areas of the medical and life sciences, biology, and in fact next January, even economics. I'm hopeful that as more and more areas continue to see the problem with significance testing, maybe we can get it out of science and replace it with something better.

Hannah White: Finally, based on your experience taking this stand on null hypothesis significance testing, and on your experience as an editor, what advice would you give to researchers, or to future researchers, who are looking to publish their work in a journal like yours?

David Trafimow: One of the interesting things about Basic and Applied Social Psychology is that contributions can be made either with respect to basic research in social psychology, or to applied research in social psychology, or both. Now, one of the mistakes that authors sometimes make is that they try to contribute at both levels, which is fine if they succeed. But they usually don't. It's very difficult to contribute to both levels at the same time. In fact, it's even difficult to contribute at either level by itself. Sometimes, by trying too hard to be both basic and applied, they end up not contributing at either level. Then I have to explain that they haven't succeeded in making the contribution, or at least not one sufficient for publication, and they don't get published. My advice would be to be really clear about what the contribution is that you want to make, before you even start doing the research.

Hannah White: Yes, that's great advice. Well, I think that brings us to the end of our interview. Thank you again so much for sharing your thoughts, Dr. Trafimow. For those listening, you can find more about Basic and Applied Social Psychology at [link omitted—ed.]. And we'll stop it there.