Enhancing Fact-Checking: From Crowdsourced Validation to Integration with Large Language Models

Abstract

Information retrieval effectiveness evaluation is often carried out by means of test collections. Many works have investigated possible sources of bias in such an approach. We propose a systematic approach to identify bias and its causes, and to remove it, thereby enforcing fairness in effectiveness evaluation by means of test collections.

Publication
Proceedings of the 14th Italian Information Retrieval Workshop.
