To Use or Not to Use AI as Referees in Scientific Journals? Proposal for a More Efficient Model
Keywords:
Peer review, Artificial intelligence, Large language models, Academic publishing, AI ethics, Hybrid peer review model

Abstract
The peer review process plays a critical role in ensuring the quality, reliability, and validity of scientific publications. However, traditional peer review faces increasing challenges, including prolonged review times, reviewer fatigue, and inherent biases, which impact its efficiency and objectivity. Recent advancements in artificial intelligence (AI), particularly in natural language processing (NLP) and large language models (LLMs), offer promising solutions for automating aspects of peer review. This study explores the feasibility of AI-driven peer review, assessing its potential benefits and limitations. AI tools can expedite manuscript screening, detect plagiarism, verify statistical accuracy, and ensure adherence to journal guidelines, reducing the burden on human reviewers and accelerating publication timelines. However, concerns persist regarding AI’s lack of critical judgment, potential algorithmic biases, and transparency issues. A hybrid AI-human peer review model is proposed, where AI handles initial screening and technical checks while human reviewers focus on intellectual and contextual evaluations. This integrated approach could enhance the efficiency and fairness of the peer-review process while maintaining scientific rigor. The study underscores the need for ethical frameworks, transparency measures, and continuous improvements to AI systems to ensure responsible AI integration in academic publishing.
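The hybrid workflow described above can be sketched as a simple triage pipeline: an AI stage performs the technical checks (plagiarism, guideline compliance, statistical sanity), and only manuscripts that pass are routed to human reviewers for intellectual evaluation. The following Python sketch is purely illustrative; all class names, check functions, and thresholds are assumptions, not part of the study.

```python
# Illustrative sketch of the proposed hybrid AI-human peer review routing.
# All names and thresholds here are hypothetical, for demonstration only.

from dataclasses import dataclass


@dataclass
class Manuscript:
    text: str
    similarity_score: float     # hypothetical plagiarism-checker output (0-1)
    follows_guidelines: bool    # automated journal-guideline compliance check
    stats_consistent: bool      # automated statistical sanity check


def ai_screening(ms: Manuscript, similarity_threshold: float = 0.25) -> list[str]:
    """AI stage: technical checks only; returns a list of flagged issues."""
    issues = []
    if ms.similarity_score > similarity_threshold:
        issues.append("possible plagiarism")
    if not ms.follows_guidelines:
        issues.append("guideline violations")
    if not ms.stats_consistent:
        issues.append("statistical inconsistencies")
    return issues


def route(ms: Manuscript) -> str:
    """Route a manuscript: flagged papers return to authors, clean ones go to humans."""
    issues = ai_screening(ms)
    if issues:
        return "return to authors: " + ", ".join(issues)
    return "forward to human reviewers for intellectual evaluation"


clean = Manuscript("...", similarity_score=0.10,
                   follows_guidelines=True, stats_consistent=True)
print(route(clean))  # forward to human reviewers for intellectual evaluation
```

The design point this sketch captures is the division of labor argued for in the abstract: the AI stage never accepts or rejects on intellectual merit; it only filters out technically deficient submissions so human reviewers can focus on contextual evaluation.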
License
Copyright 2025 Mónica Gozalbo, Carla Soler, Nadia San Onofre, José M. Soriano (Authors)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Open Access Journal.
Published by: IBERAMIA, Sociedad Iberoamericana de Inteligencia Artificial (www.iberamia.org).