

Wednesday, March 16, 2016

The Weapons Priming Effect, Pt. 2: Meta-analysis

Meta-Analysis
Even in the 1970s the Weapons Priming Effect was considered hard to believe. A number of replications were conducted, failed to find an effect, and were published (Buss, Booker, & Buss, 1972; Ellis, Weiner, & Miller, 1971; Page & Scheidt, 1971).

Remarkable to think that in 1970 people could publish replications with null results, isn't it? What the hell happened between 1970 and 2010? Anyway...

To try to resolve the controversy, the results were aggregated in a meta-analysis (Carlson et al., 1990). To me, this is an interesting meta-analysis because of how small the studies are: the median cell size is about 11, the largest is 52, and 80% of the cells have 15 or fewer subjects.
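Cells that small have very little power to detect the kind of effect at issue. Here's a rough, purely illustrative calculation (the d = 0.3 target is the weapons-prime estimate discussed below; statsmodels handles the noncentral-t math):

# Rough power of a two-group comparison at the cell sizes Carlson et al.
# aggregated. Illustrative only; the cell sizes and the d = 0.3 target
# come from the discussion in this post.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (11, 15, 52):  # median cell, 80th-percentile cell, largest cell
    power = analysis.power(effect_size=0.3, nobs1=n, ratio=1.0, alpha=0.05)
    print(f"n = {n:2d} per cell -> power = {power:.2f} to detect d = 0.3")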

Carlson et al. concluded "strong support" for "the notion that incidentally-present negative or aggression cues generally enhance aggressiveness among individuals already experiencing negative affect." However, across all studies featuring only weapons as cues, "a nonsignificant, near-zero average effect-size value was obtained."

Carlson et al. argue that this is because of two equal but opposite forces (emphasis mine):
Among subjects whose hypothesis awareness or evaluation apprehension was specifically elevated by an experimental manipulation or as a natural occurrence, as determined by a post-session interview, the presence of weapons tended to inhibit aggression. In contrast, the presence of weapons enhanced the aggression of nonapprehensive or less suspicious individuals.

In short, Carlson et al. argue that when participants know they're being judged or evaluated, seeing a gun kicks them into self-control mode and they aggress less. But when participants are less aware, seeing a gun increases their aggression by about d = 0.3.

I wanted to take a quick look for potential publication bias, so I pulled the tables out of the PDF and tried to wrangle them back into CSV. You can find that table and some code in a GitHub repo here.
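For what it's worth, the kind of quick check I had in mind is Egger's regression test: regress each study's standardized effect on its precision, and an intercept far from zero suggests funnel-plot asymmetry. A minimal sketch, assuming a CSV with columns d and se (the file and column names here are hypothetical, not necessarily what's in the repo):

# Egger's regression test for funnel-plot asymmetry: regress the
# standardized effect (d / se) on precision (1 / se) and look at the
# intercept. File and column names below are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("carlson_effects.csv")
z = df["d"] / df["se"]           # standardized effects
precision = 1.0 / df["se"]

egger = sm.OLS(z, sm.add_constant(precision)).fit()
print(egger.params)              # "const" is the Egger bias intercept
print(egger.pvalues)             # a small intercept p-value suggests asymmetry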

So far, I've only been able to confirm the following results:

First, I confirm the overall analysis suggesting an effect of aggression cues in general (d = 0.26 [0.15, 0.36]). However, there's a lot of heterogeneity here (I^2 = 73.5%), so I wonder how helpful a conclusion that is.

Second, I can confirm the overall null effect of weapons primes on aggressive behavior (d = 0.05 [-0.21, 0.32]). Again, there's a lot of heterogeneity (I^2 = 71%).
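In case it's useful, here's roughly what those pooled estimates involve. I don't know exactly which estimator Carlson et al. used, but a hand-rolled DerSimonian-Laird random-effects model (my own sketch, with made-up function and variable names) gets you the pooled d, its confidence interval, and I^2:

import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooling (DerSimonian-Laird) of effect sizes d with
    sampling variances v. Returns the pooled d, its 95% CI, and I^2."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)           # Cochran's Q
    k = len(d)
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / C)           # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se), i2

Each study's sampling variance v is roughly (n1 + n2)/(n1*n2) + d^2 / (2*(n1 + n2)), which is where those tiny cell sizes come back to bite.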

However, I haven't been able to confirm the results about splitting by sophistication. Carlson et al. don't do a very good job of reporting these codings in their table; sometimes a cell will mention "low sophistication," but that's about it. As best I can tell, unless the original experimenters specifically reported subjects as being hypothesis- or evaluation-aware, Carlson et al. considered the subjects naive.

But splitting up the meta-analysis this way, I still don't get any significant results -- just a heap of heterogeneity. Among the Low Awareness/Sophistication group, I get d = 0.17 [-0.15, 0.49]. Among the High Awareness/Sophistication group, I get d = -0.30 [-0.77, 0.16]. Both are still highly contaminated by heterogeneity (Low Awareness: I^2 = 76%; High Awareness: I^2 = 47%), indicating that maybe these studies are too different to really be mashed together like this.
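For the record, the split itself is just the same pooling run within each subgroup, something like this (the "awareness" column is my own hypothetical moderator coding, building on the earlier snippets):

# Pool within each awareness subgroup, reusing dersimonian_laird() and the
# data frame from the earlier snippets. The "awareness" column name is
# hypothetical.
for level, grp in df.groupby("awareness"):
    est, ci, i2 = dersimonian_laird(grp["d"], grp["se"] ** 2)
    print(f"{level}: d = {est:.2f} [{ci[0]:.2f}, {ci[1]:.2f}], I^2 = {i2:.0f}%")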

There's probably something different about how I'm doing it versus how Carlson et al. did it. Often, several effect sizes are entered from the same study, so some control groups get double- or triple-counted, which overstates the precision of those studies. I'm not sure how Carlson et al. handled that.
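One crude fix for that double-counting, if I wanted to try it, is the usual heuristic of splitting a shared control group's n across the comparisons that reuse it before computing each comparison's variance. A sketch, again with hypothetical column names (and assuming every row within a study shares one control group):

# Split each study's shared control n across the comparisons that reuse it,
# then recompute the variance of d with the adjusted control n. Column
# names ("study_id", "n_treat", "n_control") are hypothetical.
comparisons_per_study = df.groupby("study_id")["d"].transform("size")
df["n_control_adj"] = df["n_control"] / comparisons_per_study
df["v_adj"] = (
    (df["n_treat"] + df["n_control_adj"]) / (df["n_treat"] * df["n_control_adj"])
    + df["d"] ** 2 / (2 * (df["n_treat"] + df["n_control_adj"]))
)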

It goes to show how difficult it can be to replicate a meta-analysis even when you've got much of the data in hand. Without a full .csv file and the software syntax, reproducing a meta-analysis is awful.

A New Meta-Analysis
It'd be nice to see the Carlson et al. meta-analysis updated with a more modern review, one with more studies and bigger sample sizes. That would allow for better tests of the underlying effect, better adjustments for bias, and better exploration of the causes of heterogeneity.

Arlin Benjamin Jr. and Brad Bushman are working on just such a meta-analysis, which seems to have inspired, in part, Bushman's appearance on Inquiring Minds. The manuscript is under revision, so it is not yet public. They've told me they'll send me a copy once it's accepted.

It's my hope that Benjamin and Bushman will be sure to include a full .csv file with clearly coded moderators. A meta-analysis that can't be reproduced, examined, and tested is of little use to anyone.
