Background: The between-study variance makes it possible to work with random-effects models in meta-analysis, which account for more sources of variability than the fixed-effect model and thereby favor the generalization of meta-analytic results. It is one of the most important parameters in random-effects meta-analyses, as it is needed both to describe the parametric distribution of the effect under study and to estimate the mean effect. A wide variety of point estimators for the between-study variance are currently available, along with several ways to obtain its confidence interval. Point and confidence interval estimators differ in the estimation method (method of moments, maximum likelihood, Bayesian, or non-parametric methods) and in their computation (iterative or analytical). Moreover, it is worth noting that confidence intervals differ in whether or not they need to be built from a point estimate. Previous studies have shown that choosing different estimators for the between-study variance may lead to different statistical conclusions. Early simulation work also shows that these estimators differ in bias, efficiency, and confidence interval coverage rate depending on variables such as the magnitude of the between-study variance, the number of studies, or the sample size of the primary studies included in the meta-analysis. However, the random-effects model assumes that the parametric distribution of effect sizes is normal, an assumption widely violated in Psychology, especially in reliability generalization meta-analyses, where the effect size (a reliability coefficient) is asymmetrically distributed.
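To make the role of the between-study variance concrete, the following is a minimal Python sketch of one well-known moment estimator, the DerSimonian-Laird estimator, together with the random-effects pooled mean that depends on it. The function names and the example values in the usage note are illustrative, not part of the study:

```python
import numpy as np

def dl_tau2(y, v):
    """DerSimonian-Laird moment estimator of the between-study
    variance tau^2, given effect sizes y and within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)             # fixed-effect pooled mean
    q = np.sum(w * (y - ybar) ** 2)              # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)      # truncated at zero

def re_mean(y, v, tau2):
    """Random-effects pooled mean: tau^2 enters the study weights."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / (v + tau2)
    return np.sum(w * y) / np.sum(w)
```

For example, with effect sizes [0.1, 0.5, 0.9] and equal within-study variances of 0.04, `dl_tau2` returns 0.12, and that value then feeds into the weights of `re_mean`.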
Objectives and Method: As we were unable to find literature on the performance of between-study variance estimators when the parametric distribution departs from normality, the present study uses Monte Carlo simulation with two aims: (a) to compare all available estimators in non-normal contexts in terms of bias, efficiency, and confidence interval coverage rate and width, and (b) to check whether the results of previous theoretical and simulation studies are replicated.
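A Monte Carlo design of this kind can be sketched as follows. The skewed generating distribution (a rescaled chi-square), all parameter values, and the choice of the DerSimonian-Laird estimator are illustrative assumptions for the sketch, not the study's actual simulation conditions:

```python
import numpy as np

rng = np.random.default_rng(1234)

def dl_tau2(y, v):
    # compact DerSimonian-Laird moment estimator of tau^2
    w = 1.0 / v
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (len(y) - 1)) / c)

def mean_bias(k=20, tau2=0.05, v=0.02, df=4, reps=2000):
    """Mean bias of the estimator when the true effects follow a
    right-skewed (rescaled chi-square) distribution with variance tau2."""
    ests = np.empty(reps)
    for r in range(reps):
        # chi2(df) has variance 2*df; center and rescale so Var(theta) = tau2
        u = (rng.chisquare(df, size=k) - df) * np.sqrt(tau2 / (2.0 * df))
        theta = 0.5 + u                                   # true study effects
        y = theta + rng.normal(0.0, np.sqrt(v), size=k)   # observed effects
        ests[r] = dl_tau2(y, np.full(k, v))
    return ests.mean() - tau2
```

Efficiency and confidence interval coverage and width would be tallied over the same replications; the full design would additionally cross the degree of skewness with the number of studies, the sample sizes, and the magnitude of the between-study variance.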
Results and Conclusions: Still in development.