AI conference's papers contaminated by AI hallucinations
theregister.co.uk — GPTZero, a detector of AI output, has found yet again that scientists are undermining their credibility by relying on unreliable AI assistance.
The New York-based biz has identified more than 100 hallucinations in 51 papers accepted by the Conference on Neural Information Processing Systems (NeurIPS). This finding follows the company's prior discovery of 50 hallucinated citations in papers under review by the International Conference on Learning Representations (ICLR).
GPTZero's senior machine-learning engineer Nazar Shmatko, head of machine learning Alex Adam, and academic writing editor Paul Esau argue in a blog post that the availability of generative AI tools has fueled "a tsunami of AI slop."
"Between 2020 and 2025, submissions to NeurIPS increased more than 220 percent from 9,467 to 21,575," they observe. "In response, organizers have had to recruit ever greater numbers of reviewers, resulting in issues of oversight, expertise alignment, negligence, and even fraud ...
Copyright of this story solely belongs to theregister.co.uk.

