13:00 - 14:00
Chair/s:
Michael Bosnjak (ZPID - Leibniz Institute for Psychology, Trier, Germany)
Employees (and managers) who listen well are claimed to be superior job performers across occupations and industries. However, the evidence for this claim is scattered across fields, including Marketing, Nursing, Communication, and Management. To assess the effects of listening, we propose a theoretical account predicting that the perception that an employee listens well is associated with (a) the speaker’s positive affect, (b) the listener’s knowledge, and (c) the quality of the relationship between listener and speaker, and consequently (d) the listener’s and the speaker’s job performance. To assess our account’s plausibility, we propose the registration of a comprehensive systematic review and meta-analyses of the effects of listening on these four outcomes. For the systematic review, we consider challenges in searching the ProQuest and EBSCO databases. We propose to use covidence.org to screen and extract papers. To extract information from the included papers and minimize coder errors, we constructed a user-friendly Qualtrics survey. The Qualtrics survey accepts input from EndNote, and its output is read into R. We demonstrate, with four studies, how our R code will be used to calculate inter-judge agreement, flag discrepancies between coders, correct errors, and perform three-level meta-analyses testing our hypotheses and a host of potential methodological moderators.
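The coding-reliability step described in the abstract (computing inter-judge agreement and flagging coder discrepancies) can be illustrated with a minimal sketch. This is not the authors’ R code; it is a hedged Python analogue, and the two coders’ ratings and category labels below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two coders rated independently
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

def flag_discrepancies(ratings_a, ratings_b):
    """Indices of items the two coders coded differently (for re-checking)."""
    return [i for i, (a, b) in enumerate(zip(ratings_a, ratings_b)) if a != b]

# Invented example: two coders classifying six studies by design
coder1 = ["exp", "exp", "corr", "exp", "corr", "corr"]
coder2 = ["exp", "corr", "corr", "exp", "corr", "corr"]
kappa = cohens_kappa(coder1, coder2)       # agreement beyond chance
flags = flag_discrepancies(coder1, coder2)  # items needing resolution
```

In practice, flagged items would be resolved by discussion before the corrected codings feed into the meta-analysis.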
14:00 - 15:30
Chair/s:
Lisa Spitzer (Leibniz Institute for Psychology (ZPID), Trier, Germany)
15:30 - 16:00
Coffee break
16:00 - 17:00
Chair/s:
Tanja Burgard (ZPID, Trier, Germany)
The low replication rates observed in psychology imply either that the psychological literature is filled with false-positive results or that psychological effects vary considerably even when the procedures and methods of primary studies are recreated as closely as possible. The latter possibility has recently sparked much debate about heterogeneity in psychological effects and provoked heightened interest in meta-analytic estimates of such heterogeneity. Several authors have argued from a meta-theoretical perspective that heterogeneity is to be expected owing to the typically large number of relevant but unknown (and, hence, uncontrolled) factors and moderators. In contrast, direct replications are designed and conducted under the assumption that faithfully reinstating the procedures of the original studies should suffice to replicate their findings. In parallel to these debates, several large-scale projects (Many Labs, Registered Replication Reports) have provided data that, for the first time, allow an empirical assessment of heterogeneity across a sizeable number of directly replicated psychological effects. In the first part of this talk, I will give an overview of the current debate around heterogeneity and summarize previous meta-analytic heterogeneity estimates from replication projects. In the second part, I will discuss several problems with these estimates, stemming from the methods applied both in conducting replications and in meta-analyzing their results. A central theme of this discussion is that there is little reason to expect homogeneity in psychological replications even when moderators are absent. Conversely, heterogeneity does not imply the existence of moderator effects but may simply be due to several methodological artifacts.
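For readers less familiar with the heterogeneity estimates under discussion, the standard random-effects quantities can be made concrete with a minimal sketch: Cochran’s Q, the DerSimonian–Laird estimate of the between-study variance τ², and I² (the share of observed variance beyond sampling error). The effect sizes and variances below are invented for illustration and are not drawn from the replication projects.

```python
def dl_heterogeneity(effects, variances):
    """Cochran's Q, DerSimonian-Laird tau^2, and I^2 for k study effects."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - mean) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # truncated at zero
    i2 = max(0.0, (q - (k - 1)) / q) * 100                 # percent, truncated at zero
    return q, tau2, i2

# Invented effects (e.g., standardized mean differences) and sampling variances
q, tau2, i2 = dl_heterogeneity([0.1, 0.6, 0.2, 0.7], [0.01, 0.02, 0.01, 0.02])
```

Note that τ² and I² are truncated at zero whenever Q falls below its degrees of freedom, which is one reason heterogeneity estimates from small sets of replications can be unstable.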
17:00 - 18:30
Chair/s:
Anita Chasiotis (ZPID, Trier, Germany)