In a recent study published in the International Journal for Educational Integrity, researchers in China compared, for the first time, the accuracy of artificial intelligence (AI)-based content detectors and human reviewers in identifying AI-generated rehabilitation-related articles, both original and paraphrased. Among the tools evaluated, Originality.ai detected 100% of AI-generated texts, professorial reviewers accurately identified at least 96% of AI-rephrased articles, and student reviewers identified only 76%, highlighting the effectiveness of AI detectors and experienced reviewers.
Study: The great detectives: humans versus AI detectors in catching large language model-generated medical writing.

ChatGPT (short for "Chat Generative Pretrained Transformer"), a large language model (LLM) chatbot, is widely used in various fields. In medicine and digital health, this AI tool may be used to perform tasks such as generating discharge summaries, aiding diagnosis, and providing health information.
Despite its utility, scientists oppose granting it authorship in academic publishing because of concerns about accountability and reliability. AI-generated content can be misleading, necessitating robust detection methods. Existing AI detectors, such as Turnitin and Originality.ai, show promise but struggle with paraphrased texts and often misclassify human-written articles. Human reviewers also exhibit only moderate accuracy.