AIDME: A Scalable, Interpretable Framework for AI-Aided Scoping Reviews

Abstract

Scientific publishing is expanding rapidly across disciplines, making it increasingly difficult for researchers to organize, filter, and synthesize the literature. Systematic reviews address this challenge through structured analysis, but the early stages, particularly the screening phase, can become overwhelming when faced with thousands of records. Scoping reviews are often used as a preparatory step to explore and structure the literature before applying stricter protocols such as the PRISMA 2020 guidelines. In this work, we introduce AIDME (AI-Aided Document Mapping and Evaluation), a general-purpose framework that leverages Large Language Models (LLMs), topic modeling, thematic labeling, and citation network analysis to support the creation of scoping reviews in research areas with high publication volume. AIDME enables scalable filtering, clustering, labeling, and prioritization of publications while preserving human oversight. We evaluate the proposed framework through a case study on methods for assessing truthfulness in fact-checking, a fast-evolving field characterized by inconsistent terminology and fragmented methodologies. Our results show that AIDME reduces manual effort and produces structured outputs that facilitate subsequent PRISMA-compliant systematic reviews.

Publication
Proceedings of the 2025 International ACM SIGIR Conference on Innovative Concepts and Theories in Information Retrieval (ICTIR)
