There’s an argument brewing over “AI-generated” papers submitted to this year’s ICLR, a long-running academic conference focused on AI.
At least three AI labs — Sakana, Intology, and Autoscience — claim to have used AI to generate studies that were accepted to ICLR workshops. At conferences like ICLR, workshop organizers typically review studies for publication in the conference’s workshop track.
Sakana informed ICLR leaders before it submitted its AI-generated papers and obtained the peer reviewers’ consent. The other two labs — Intology and Autoscience — didn’t, an ICLR spokesperson confirmed to Trendster.
Several AI academics took to social media to criticize Intology and Autoscience’s stunts as a co-opting of the scientific peer review process.
“All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor,” wrote Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego, in an X post. “It makes me lose respect for all those involved regardless of how impressive the system is. Please disclose this to the editors.”
As the critics noted, peer review is a time-consuming, labor-intensive, and mostly volunteer ordeal. According to one recent Nature survey, 40% of academics spend two to four hours reviewing a single study. And that workload is escalating: the number of papers submitted to the largest AI conference, NeurIPS, grew to 17,491 last year, up 41% from 12,345 in 2023.
Academia already had an AI-generated copy problem. One analysis found that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetic text. But AI companies using peer review to effectively benchmark and advertise their tech is a relatively new occurrence.
“[Intology’s] papers received unanimously positive reviews,” Intology wrote in a post on X touting its ICLR results. In the same post, the company went on to say that workshop reviewers praised one of its AI-generated study’s “clever idea[s].”
Academics didn’t look kindly on this.
Ashwinee Panda, a postdoctoral fellow at the University of Maryland, said in an X post that submitting AI-generated papers without giving workshop organizers the right to refuse them showed a “lack of respect for human reviewers’ time.”
“Sakana reached out asking whether we would be willing to participate in their experiment for the workshop I’m organizing at ICLR,” Panda added, “and I (we) said no […] I think submitting AI papers to a venue without contacting the [reviewers] is bad.”
Not for nothing, many researchers are skeptical that AI-generated papers are worth the peer review effort.
Sakana itself admitted that its AI made “embarrassing” citation errors, and that only one of the three AI-generated papers the company chose to submit would have met the bar for conference acceptance. Sakana withdrew its ICLR paper before it could be published, in the interest of transparency and respect for ICLR convention, the company said.
Alexander Doria, the co-founder of AI startup Pleias, said that the raft of surreptitious synthetic ICLR submissions pointed to the need for a “regulated company/public agency” to carry out “high-quality” evaluations of AI-generated studies for a price.
“Evals [should be] done by researchers fully compensated for their time,” Doria said in a series of posts on X. “Academia is not there to outsource free [AI] evals.”