OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment
Published in arXiv preprint, 2026
Recommended citation: Yifan Liu, Yifan Wang, Zixuan Li, Zizheng Wang, Zihan Wang, Wenxuan Wang, Yifei Wang, Yifan Song, Yifan Liu, Xuanjing Huang, Zhilin Yang, Wei Chen, Tao Gui: OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment. arXiv preprint arXiv:2601.01576 (2026). https://arxiv.org/abs/2601.01576
The assessment of scholarly novelty is a cornerstone of academic peer review, yet it remains a challenging, time-consuming task that relies heavily on human expertise. With the exponential growth of research publications, there is an urgent need for automated systems that can assist in evaluating the novelty of scholarly work. In this paper, we introduce OpenNovelty, an LLM-powered agentic system for verifiable scholarly novelty assessment. OpenNovelty employs a multi-agent architecture in which specialized agents collaborate to analyze research papers and assess their novelty through a structured, interpretable process. The system integrates several components, including a semantic analysis module, a citation network analyzer, and a novelty verification engine. We evaluate OpenNovelty across diverse academic domains and demonstrate its effectiveness in producing reliable novelty assessments with human-interpretable explanations. Our results show that OpenNovelty correlates strongly with human expert evaluations while significantly reducing the time and effort required for novelty assessment. The system is designed to be transparent and verifiable, with every assessment decision traceable through a detailed reasoning chain. This work represents a significant step toward automating and democratizing scholarly novelty assessment.
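
The abstract describes the architecture only at a high level. As an illustration, below is a minimal Python sketch of how specialized agents might feed a verification stage while preserving a traceable reasoning chain. Everything here is a hypothetical assumption for exposition: the class names (`AgentFinding`, `SemanticAnalysisAgent`, `CitationNetworkAgent`, `NoveltyVerificationAgent`), the keyword- and citation-based heuristics, and the scoring rule are illustrative stand-ins, not the authors' actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentFinding:
    """One agent's verdict plus the evidence that makes it checkable."""
    agent: str                # which specialized agent produced the finding
    supports_novelty: bool    # does this finding argue for novelty?
    claim: str                # human-readable assessment claim
    evidence: list[str]       # items backing the claim (for traceability)

@dataclass
class NoveltyReport:
    """Final assessment carrying the full reasoning chain."""
    score: float              # fraction of agents supporting novelty
    reasoning_chain: list[AgentFinding] = field(default_factory=list)

class SemanticAnalysisAgent:
    """Keyword-overlap stand-in for an LLM comparison of claimed contributions."""
    def run(self, paper: dict, related: list[dict]) -> AgentFinding:
        overlapping = [r["title"] for r in related
                       if set(paper["keywords"]) & set(r["keywords"])]
        novel = len(overlapping) < 2
        claim = ("few topically overlapping prior papers found" if novel
                 else "substantial topical overlap with prior work")
        return AgentFinding("semantic_analysis", novel, claim, overlapping)

class CitationNetworkAgent:
    """Flags closely related papers that the submission fails to cite."""
    def run(self, paper: dict, related: list[dict]) -> AgentFinding:
        uncited = [r["title"] for r in related
                   if r["title"] not in paper["citations"]]
        claim = ("closest prior work is acknowledged" if not uncited
                 else "potentially overlooked prior work")
        return AgentFinding("citation_network", not uncited, claim, uncited)

class NoveltyVerificationAgent:
    """Aggregates findings into one score, keeping the reasoning chain intact."""
    def run(self, findings: list[AgentFinding]) -> NoveltyReport:
        score = sum(f.supports_novelty for f in findings) / len(findings)
        return NoveltyReport(score=score, reasoning_chain=findings)

def assess_novelty(paper: dict, related: list[dict]) -> NoveltyReport:
    """Orchestrate the specialized agents and return a traceable report."""
    findings = [SemanticAnalysisAgent().run(paper, related),
                CitationNetworkAgent().run(paper, related)]
    return NoveltyVerificationAgent().run(findings)

if __name__ == "__main__":
    paper = {"keywords": {"novelty assessment", "LLM agents"},
             "citations": {"Prior Survey on Peer Review"}}
    related = [{"title": "Prior Survey on Peer Review",
                "keywords": {"peer review"}}]
    report = assess_novelty(paper, related)
    print(f"novelty score: {report.score:.2f}")
    for finding in report.reasoning_chain:
        print(f"[{finding.agent}] {finding.claim} | evidence: {finding.evidence}")
```

The design point the sketch tries to capture is the one the abstract emphasizes: each agent returns its evidence alongside its verdict, so the final report is not a bare score but a chain of findings that a human reviewer can audit step by step.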
