
This data-generating mechanism simulates primary studies estimating treatment effects using Cohen's d. The observed effect size is modeled as a fixed mean plus random heterogeneity across studies, with sample sizes varying to generate differences in standard errors. The simulation introduces publication bias via a selection algorithm where the probability of publication depends nonlinearly on the sign and p-value of the effect, with regimes for no, medium, and strong publication bias. It also incorporates questionable research practices (QRPs) such as optional outlier removal, selection between dependent variables, use of moderators, and optional stopping.

The description and code are based on Hong and Reed (2021). The data-generating mechanism was introduced in Carter et al. (2019).

Usage

# S3 method for class 'Carter2019'
dgm(dgm_name, settings)

Arguments

dgm_name

DGM name (automatically passed)

settings

List containing

mean_effect

Mean effect

effect_heterogeneity

Heterogeneity of the mean effect across studies

bias

Degree of publication bias; one of the following levels: "none", "medium", "high".

QRP

Degree of questionable research practices; one of the following levels: "none", "medium", "high".

n_studies

Number of effect size estimates

Value

Data frame with

yi

effect size

sei

standard error

ni

sample size

es_type

effect size type
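
As a sketch of how the interface fits together, a call using the settings names from the Arguments section might look as follows; the numeric values and the literal "Carter2019" dispatch string are illustrative assumptions, not prescribed defaults.

# Hypothetical call; values are illustrative only
settings <- list(
  mean_effect          = 0.3,      # illustrative value
  effect_heterogeneity = 0.2,      # illustrative value
  bias                 = "medium",
  QRP                  = "medium",
  n_studies            = 50
)
dat <- dgm("Carter2019", settings)
head(dat)  # data frame with columns yi, sei, ni, es_type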

Details

This simulation environment is based on the framework described by Carter, Schönbrodt, Gervais, and Hilgard (2019). In this setup, primary studies estimate the effect of a treatment using Cohen's d as the effect size metric. The observed difference between treatment and control groups is modeled as the sum of a fixed effect (alpha1) and a random component, which introduces effect heterogeneity across studies. The degree of heterogeneity is controlled by the parameter sigma2_h. Variability in the standard errors of d is generated by simulating primary studies with different sample sizes.
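
A minimal R sketch of this core step (without publication bias or QRPs) is given below; the equal group sizes, the treatment of sigma2_h as a variance, and the sample-size handling are simplifying assumptions for illustration, not the exact Carter et al. (2019) implementation.

# Illustrative sketch of one primary study: true effect = alpha1 plus random
# heterogeneity, two equal groups of size n_per_group, Cohen's d and its
# approximate standard error.
simulate_study <- function(alpha1, sigma2_h, n_per_group) {
  delta_i <- alpha1 + rnorm(1, mean = 0, sd = sqrt(sigma2_h))  # study-specific true effect
  treat   <- rnorm(n_per_group, mean = delta_i, sd = 1)
  control <- rnorm(n_per_group, mean = 0, sd = 1)
  sp <- sqrt((var(treat) + var(control)) / 2)                  # pooled standard deviation
  d  <- (mean(treat) - mean(control)) / sp                     # Cohen's d
  se <- sqrt(2 / n_per_group + d^2 / (4 * n_per_group))        # approximate SE of d
  data.frame(yi = d, sei = se, ni = 2 * n_per_group, es_type = "d")
}

set.seed(1)
simulate_study(alpha1 = 0.3, sigma2_h = 0.1, n_per_group = 30)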

The simulation incorporates two main types of distortions in the research environment. First, a publication selection algorithm is used, where the probability of a study being "published" depends nonlinearly on both the sign of the estimated effect and its p-value. Three publication selection regimes are modeled: "No Publication Bias," "Medium Publication Bias," and "Strong Publication Bias," each defined by different parameters in the selection algorithm. Second, the simulation includes four types of questionable research practices (QRPs): (a) optional removal of outliers, (b) optional selection between two dependent variables, (c) optional use of moderators, and (d) optional stopping.
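
The selection step can be pictured with a simple step-function rule like the one below; the probabilities are placeholders chosen for illustration and do not reproduce the parameters of the three publication-bias regimes.

# Placeholder step-function selection rule: publication probability depends on
# the sign of the estimate and whether the p-value clears .05. The values are
# illustrative only, not the Carter et al. (2019) regime parameters.
publish_prob <- function(d, p_value,
                         p_sig_pos = 1.00,   # significant and positive
                         p_nonsig  = 0.20,   # non-significant
                         p_sig_neg = 0.05) { # significant but negative
  if (p_value < .05 && d > 0) {
    p_sig_pos
  } else if (p_value >= .05) {
    p_nonsig
  } else {
    p_sig_neg
  }
}

is_published <- function(d, p_value) runif(1) < publish_prob(d, p_value)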

References

Carter EC, Schönbrodt FD, Gervais WM, Hilgard J (2019). “Correcting for bias in psychology: A comparison of meta-analytic methods.” Advances in Methods and Practices in Psychological Science, 2(2), 115-144. doi:10.1177/2515245919847196.

Hong S, Reed WR (2021). “Using Monte Carlo experiments to select meta-analytic estimators.” Research Synthesis Methods, 12(2), 192-215. doi:10.1002/jrsm.1467.