Evaluation of Crowdsourced Peer Review using Synthetic Data and Simulations

Abstract

The scholarly publishing process relies on peer review to uphold the quality of scientific knowledge. However, challenges such as increasing submission volumes and potential malicious behavior undermine its effectiveness. In this study, we evaluate Readersourcing, an alternative peer review approach that leverages community-driven judgments. Using simulations with synthetic data generated from a probabilistic model and a publicly available implementation, we assess six quantities and examine how each component of the approach affects the outcomes. Our findings show that the co-determination algorithm captures distinct aspects of manuscript judgments compared to simpler aggregation strategies, and that key simulation parameters consistently influence the computed quantities across different settings. We also publicly release the data, code, and simulation runs.
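
To illustrate the kind of simulation described above, the following is a minimal sketch of a synthetic-data experiment: papers with a latent quality, readers who rate them with noise, and simple baseline aggregations (mean and median) compared against the ground truth. All parameters and the probabilistic model here are hypothetical; the sketch does not reproduce the Readersourcing co-determination algorithm or the paper's actual model.

import numpy as np

# Hypothetical simulation parameters (not taken from the paper).
rng = np.random.default_rng(42)
n_papers, n_readers = 200, 50

# Latent paper quality in [0, 1] and a per-reader noise level.
quality = rng.uniform(0.0, 1.0, size=n_papers)
reader_noise = rng.uniform(0.05, 0.3, size=n_readers)

# Each reader rates each paper: latent quality plus Gaussian noise, clipped to [0, 1].
ratings = np.clip(
    quality[:, None]
    + rng.normal(0.0, reader_noise[None, :], size=(n_papers, n_readers)),
    0.0,
    1.0,
)

# Two simple aggregation baselines of the kind the paper compares against.
mean_score = ratings.mean(axis=1)
median_score = np.median(ratings, axis=1)

# How well does each aggregate recover the latent quality?
for name, score in [("mean", mean_score), ("median", median_score)]:
    r = np.corrcoef(quality, score)[0, 1]
    print(f"{name:6s} correlation with latent quality: {r:.3f}")

In such a setup, the quantities of interest (e.g., agreement between an aggregate and the latent quality) can be recomputed while varying parameters like the number of readers or the noise level, which is the general role the simulation parameters play in the study.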

Publication
Proceedings of the 21st Conference on Information and Research Science Connecting to Digital and Library Science.
