Simple study designs in ecology produce inaccurate estimates of biodiversity responses.

Published online
23 Jul 2020
Content type
Journal article
Journal title
Journal of Applied Ecology
DOI
10.1111/1365-2664.13499

Author(s)
Christie, A. P., Amano, T., Martin, P. A., Shackelford, G. E., Simmons, B. I. & Sutherland, W. J.
Contact email(s)
apc58@cam.ac.uk

Publication language
English

Abstract

1. Monitoring the impacts of anthropogenic threats, and of interventions to mitigate those threats, is key to understanding how best to conserve biodiversity. Ecologists use many different study designs to monitor such impacts.
2. Simpler designs lacking controls (e.g. Before-After (BA) and After designs) or pre-impact data (e.g. Control-Impact (CI) designs) are considered less robust than more complex designs (e.g. Before-After Control-Impact (BACI) designs or Randomized Controlled Trials (RCTs)). However, we lack quantitative estimates of how much less accurate simpler study designs are in ecology. Such estimates could help prioritize research and weight studies by the accuracy of their design in meta-analysis and evidence assessment.
3. We compared how accurately five study designs estimated the true effect of a simulated environmental impact that caused a step-change response in a population's density. We derived empirical estimates of several simulation parameters from 47 ecological datasets to ensure our simulations were realistic.
4. We measured design performance by determining the percentage of simulations in which: (a) the true effect fell within the 95% Confidence Intervals of the effect size estimates; and (b) each design correctly estimated the direction and magnitude of the true effect. We also considered how sample size affected performance.
5. When correctly estimating the true effect's direction and magnitude to within ±30%, BACI designs performed 1.3-1.8 times better than RCTs, 2.9-4.2 times better than BA designs, 3.2-4.6 times better than CI designs and 7.1-10.1 times better than After designs (depending on sample size). Although BACI designs suffered from low power at small sample sizes, they outperformed the other designs on almost all performance measures. Increasing sample size improved the accuracy of BACI designs, but only increased the precision of simpler designs around biased estimates.
6. Synthesis and applications. We suggest that greater investment in more robust designs is needed in ecology, since inferences from simpler designs, even with large sample sizes, may be misleading. Facilitating this requires longer-term funding and stronger research-practice partnerships. We also propose 'accuracy weights' and demonstrate how they can be used to weight studies in three recent meta-analyses according to study design and sample size. We hope these tools help decision-makers and meta-analysts better account for study design when assessing evidence.
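To make the comparison of designs concrete, the minimal Python sketch below (not the authors' code) simulates a step-change impact on population density in the presence of a background temporal trend and a baseline difference between impact and control sites, then scores BA, CI and BACI estimates against the true effect using the ±30% criterion described in the abstract. All parameter names and values (baseline, trend, site_offset, sd, sample sizes) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_simulation(n=20, baseline=10.0, true_effect=-3.0,
                   site_offset=3.0, trend=2.0, sd=2.0):
    """Simulate densities at an impact and a control site, before and after
    a step-change impact of size `true_effect` (illustrative parameters only)."""
    impact_before  = rng.normal(baseline, sd, n)
    impact_after   = rng.normal(baseline + trend + true_effect, sd, n)
    control_before = rng.normal(baseline + site_offset, sd, n)
    control_after  = rng.normal(baseline + site_offset + trend, sd, n)
    return {
        # BA: before-after change at the impact site (confounded by the trend)
        "BA": impact_after.mean() - impact_before.mean(),
        # CI: impact vs control after the impact (confounded by site differences)
        "CI": impact_after.mean() - control_after.mean(),
        # BACI: difference-in-differences removes both confounders in expectation
        "BACI": (impact_after.mean() - impact_before.mean())
                - (control_after.mean() - control_before.mean()),
    }

true_effect, n_sims = -3.0, 2000
hits = {"BA": 0, "CI": 0, "BACI": 0}
for _ in range(n_sims):
    for design, est in one_simulation(true_effect=true_effect).items():
        # Count estimates within +/-30% of the true effect (direction and magnitude)
        hits[design] += abs(est - true_effect) <= 0.3 * abs(true_effect)

for design, count in hits.items():
    print(f"{design:>4}: {100 * count / n_sims:.1f}% of simulations within ±30%")
```

Under these assumed confounders the BA and CI estimates converge on biased values as sample size grows, while the BACI (difference-in-differences) estimate centres on the true effect; this mirrors, in simplified form, the pattern the abstract reports.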

Key words