Turnitin’s AI Detector: A Brief Overview

The Turnitin AI detector is designed to help educators and institutions distinguish between student-written work and content generated by AI writing assistants, such as ChatGPT. This tool leverages machine learning algorithms to analyze writing style, comparing submissions against known patterns and characteristics of AI-generated text.

When a paper is submitted to Turnitin, the submission is broken into segments of text that are roughly a few hundred words. These segments are run against Turnitin’s AI detection model, which assigns each sentence a score between 0 and 1 to determine whether it was written by a human or by AI.

The AI detector generates an overall prediction of how much text in the submission it believes has been generated by AI. It’s important to note that while Turnitin has confidence in its model, it does not make a determination of misconduct. Instead, it provides data for educators to make an informed decision based on their academic and institutional policies.
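The pipeline described above (segment the submission, score each sentence from 0 to 1, then report an overall figure) can be sketched in simplified form. This is an illustrative sketch only: Turnitin's actual model, segment size, and decision threshold are proprietary, so the 300-word segments, the 0.5 cutoff, and the `score_sentence` callback below are all assumptions made for demonstration.

```python
"""Hypothetical sketch of a segment-and-score pipeline.
Turnitin's real classifier is not public; `score_sentence` stands in for it."""
import re


def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter, good enough for illustration.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def segment(sentences: list[str], max_words: int = 300) -> list[list[str]]:
    # Group sentences into segments of roughly a few hundred words (assumed size).
    segments, current, count = [], [], 0
    for s in sentences:
        words = len(s.split())
        if current and count + words > max_words:
            segments.append(current)
            current, count = [], 0
        current.append(s)
        count += words
    if current:
        segments.append(current)
    return segments


def overall_ai_prediction(text: str, score_sentence) -> float:
    """Return the fraction of sentences whose score (0-1) crosses a cutoff.

    The 0.5 cutoff is an assumption; the real model's aggregation is unknown.
    """
    sentences = split_sentences(text)
    if not sentences:
        return 0.0
    flagged = 0
    for seg in segment(sentences):
        for s in seg:
            if score_sentence(s) >= 0.5:  # assumed cutoff
                flagged += 1
    return flagged / len(sentences)
```

With a toy scorer, `overall_ai_prediction("Alpha beta. Gamma delta! Epsilon?", lambda s: 1.0 if "Gamma" in s else 0.0)` reports one flagged sentence out of three. The point of the sketch is the shape of the process, not the classifier itself, which is where all of the real difficulty lies.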

AI Writing Detector vs. Similarity Report

Turnitin’s AI detector, specifically designed for identifying AI-generated content, differs significantly from its Similarity tool, which is primarily focused on detecting plagiarism. The core distinction lies in their objectives and the methodologies they employ to achieve these goals.

The Similarity tool scans student submissions against an extensive database of academic papers, journals, internet sources, and previously submitted work to identify matches or overlaps in text. This tool generates a similarity report that highlights the percentage of text in a submission that matches other sources, helping educators detect potential plagiarism. Its primary function is to ensure that students are submitting original work by identifying direct quotations, paraphrased content, and instances where proper citations are missing.
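To make the "percentage of matching text" concrete, here is a deliberately simplified sketch. Turnitin's matching pipeline and database are proprietary; this toy version uses word 5-gram overlap against a small in-memory source list purely to show what a similarity percentage measures.

```python
"""Hypothetical illustration of a similarity percentage.
Not Turnitin's algorithm: a simple word n-gram overlap for demonstration."""


def ngrams(text: str, n: int = 5) -> list[tuple[str, ...]]:
    # Lowercased word n-grams of the text.
    words = text.lower().split()
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]


def similarity_percentage(submission: str, sources: list[str], n: int = 5) -> float:
    """Percentage of the submission's word n-grams found in any source text."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    source_grams: set[tuple[str, ...]] = set()
    for src in sources:
        source_grams.update(ngrams(src, n))
    matched = sum(1 for g in sub if g in source_grams)
    return 100.0 * matched / len(sub)
```

A real system additionally has to handle paraphrase, citation exclusions, and a database of billions of documents, but the reported number is conceptually this: the share of the submission that overlaps known sources.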

On the other hand, the AI detector analyzes the stylistic and linguistic nuances of the submitted work. The AI detector aims to ascertain whether the submission bears the hallmarks of AI-generated content. This involves sophisticated algorithms capable of distinguishing between human and AI authorship, which is a different challenge from identifying copied or improperly cited text.

Turnitin’s AI detector cannot definitively identify AI-generated content. Unlike the plagiarism detector that flags text similar to existing sources and provides original source links, the AI detector lacks such original texts for comparison. The AI writing score estimates the likelihood of text being AI-generated. Understanding this score requires contextual knowledge of the student and the assignment, emphasizing the importance of educator insight over mere numerical values.

Considerations

In the interests of fairness and equity, Turnitin recommends that educators consider multiple factors when reviewing student writing:

  • Genre of the assignment. When evaluating a creative or personal narrative, it’s natural to anticipate a lower AI writing score, as such works prioritize the student’s unique perspective and original thought. Conversely, in the context of research-oriented writing, a lower AI writing score could suggest insufficient cited evidence supporting the student’s assertions. On the other hand, a higher score may reflect a robust foundation of evidence bolstering the claims or, in some cases, might hint at potential misconduct, whether intentional or not. Engaging with the student while reviewing their work can provide valuable insights into the underlying reasons and guide the determination of subsequent actions.
  • Length of an assignment. When assessing a concise essay as opposed to an extensive research paper, it’s important to be mindful of the potential for false positives. Essays under 300 words are more prone to being incorrectly flagged. However, while guidelines can help contextualize the AI writing score, the number alone does not determine what it means. Ultimately, only the educator, who understands the assignment’s requirements and knows the student well, can accurately gauge the true significance of the AI writing indicator.
  • Guidelines for usage of AI writing tools. Be transparent and explicit about whether generative AI writing tools and/or editing tools are permitted or prohibited.
  • Student needs. The most critical element in this decision-making process is the student. It’s essential to ensure that students with unique needs are not penalized for using tools designed to support their specific circumstances. For instance, an English language learner may be allowed to use an AI-powered translation tool, which could result in their work being flagged as AI-generated. Overlooking this could unjustly penalize the student and possibly breach the terms of their approved accommodations.