AI-Generated Studies Spark Controversy at ICLR
The International Conference on Learning Representations (ICLR) is facing scrutiny after several artificial intelligence labs submitted AI-generated research papers to its workshops. The episode has raised questions about the integrity of the peer review process in academic publishing.
AI Labs Involved
Three prominent AI labs, Sakana, Intology, and Autoscience, have reported submitting studies generated by their artificial intelligence systems to ICLR workshops.
Sakana notified ICLR officials before submitting and obtained consent from peer reviewers for its AI-generated papers. In contrast, Intology and Autoscience proceeded without consulting the conference organizers, a situation confirmed by an ICLR spokesperson.
Reactions from the Academic Community
The reaction from academics has been largely critical. Many researchers believe that the actions taken by Intology and Autoscience undermine the peer review process. For instance, Prithviraj Ammanabrolu, an assistant professor at UC San Diego, expressed his concerns on social media, stating, “All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor.”
The Value of Peer Review
Peer review is a labor-intensive process that relies largely on volunteer reviewers. A survey published in Nature found that around 40% of academics spend between two and four hours reviewing a single study. With submission volumes growing across conferences, including a 41% increase at NeurIPS in the previous year, the workload on peer reviewers is intensifying.
The Prevalence of AI-Generated Content
AI-generated text in academic submissions is not a new problem. One analysis estimated that roughly 6.5% to 16.9% of papers submitted to AI conferences in 2023 contained synthetic text. Submitting fully AI-generated studies to peer-reviewed venues, however, raises additional ethical concerns.
Citation Mistakes and Withdrawal of Papers
Sakana acknowledged that its AI produced significant citation errors and concluded that only one of its three submitted papers would have met the criteria for acceptance at ICLR. To maintain transparency and respect ICLR conventions, Sakana chose to withdraw that paper.
Calls for Regulation and Further Discussion
Some academics, such as Alexander Doria of Pleias, argue for establishing a regulated body to assess AI-generated studies. Doria emphasized that researchers should be paid for such evaluations, stating, “Evals [should be] done by researchers fully compensated for their time. Academia is not there to outsource free [AI] evals.”