Faculty Help: Generative AI Resource Guide: AI Detectors

Repository of info about impact of Generative AI on/in higher education. Focuses primarily on text generators.

AI Writing Detectors

The consensus in education regarding AI text detectors is one of strong skepticism and caution, with widespread agreement that these tools are unreliable, inconsistent, and ethically problematic when used for punishment or disciplinary purposes.

Accuracy and Reliability
Research consistently shows that AI detectors perform poorly at distinguishing human from AI-generated writing. Across multiple studies, accuracy rates averaged around 40%, and some tools misidentified every sample. While detectors like Turnitin or Copyleaks sometimes perform better, results vary widely across studies and degrade significantly when tested against newer models like GPT-4 or domain-specific content such as computer code.

Evasion Vulnerabilities
AI detectors are also highly susceptible to adversarial manipulation. Simple techniques such as paraphrasing, adding spelling errors, or altering sentence structure can drop detection accuracy to as low as 12–15%. Because generative models continually improve at mimicking human writing, the “arms race” between detectors and text generators makes reliable detection increasingly untenable.

Ethical and Equity Concerns
Perhaps most concerning is the risk of false positives—human-written work wrongly flagged as AI-generated. In some studies, false accusation rates reached 15–50%, even for top-performing tools. Detectors also display bias against non-native English writers, whose more predictable phrasing can be misclassified as AI output, creating serious equity and inclusion issues. Furthermore, detectors cannot reliably differentiate between minor AI assistance (like proofreading) and full AI generation.
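To see why even a modest false-positive rate matters at classroom scale, here is a minimal arithmetic sketch in Python. The class size, the share of honest submissions, and both error rates are illustrative assumptions, not figures drawn from the studies above.

```python
# Hypothetical illustration of the false-positive problem.
# All numbers below are assumptions for demonstration only.
students = 100           # class size
honest = 90              # students who wrote their own work
fpr = 0.10               # false-positive rate (human work flagged as AI)
tpr = 0.80               # true-positive rate (AI work correctly flagged)

false_flags = honest * fpr                 # honest students wrongly flagged
true_flags = (students - honest) * tpr     # AI-assisted submissions caught

# Of all flagged papers, what share belongs to students who wrote their own work?
share_innocent = false_flags / (false_flags + true_flags)
print(f"{false_flags:.0f} honest students flagged")
print(f"{share_innocent:.0%} of flags point at human-written work")
```

Under these assumed numbers, even a detector that catches most AI-generated work flags nine honest students, and roughly half of all flagged papers belong to students who wrote their own work.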

Educational Implications
Experts strongly recommend against punitive use of AI detectors. Instead, institutions should prioritize assessment redesign that fosters authentic learning, integrate detection tools only in non-punitive educational contexts, and rely on human judgment when evaluating student work. Continuing to depend on flawed detection systems risks undermining fairness, trust, and academic integrity.

NOTE: The above text was generated by Google NotebookLM, based on the studies referenced in this section, and then summarized by ChatGPT 5o.

Listing and linking to these resources does not indicate SFCC Library's endorsement of said resources (Editor's Note: I've actually seriously considered deleting this section altogether due to the controversy surrounding the use of these resources, but...)

  • Please keep in mind that the efficacy of each platform is not consistent. No one platform is 100% foolproof.
  • Sources are listed in alphabetical order
  • Many of these are free but require you to set up an account; some, however, only offer a free trial period.

What to do if you suspect unsanctioned use of Generative AI

  • Beforehand (“An ounce of prevention…”)
    • Develop a Generative AI use policy for your course syllabus and/or assignments
    • Draw students’ attention to that policy
    • Talk openly with students about Generative AI
    • Collect periodic writing samples from students to familiarize yourself with their writing style and voice
    • Require students to provide links to all sources, and randomly spot-check those links (a simple link-checking sketch appears after this list)
      (Some Generative AI platforms can now provide real, working links as 'sources'; however, the linked content often does not actually support the generated text.)
  • Tell the student why you believe they may have used Generative AI in an unsanctioned way
    • Do you see phrasing in their writing that clearly indicates a Generative AI platform wrote it? 
      • “I’m sorry but as a Large Language Model, I can’t….”
      • “Certainly! I’m happy to write that essay for you!” 
    • Is their writing style or vocabulary noticeably different from anything you’ve seen from them before?
    • Does their writing fail to address the question or prompt in the way you’d expect?
  • Engage the student in a conversation about their work
    • Are they able to engage and converse with you about their work or do they have trouble recalling key aspects? 
    • Ask the student to discuss both their thought and writing processes. Are they able to do this? 
    • Can they define terms/words that you believe may have been provided by Generative AI?  
    • Document this interaction. 
  • Please consider very carefully before...
    • Using AI detection software. 
    • Assuming that use of words like “delve,” “tapestry,” “landscape,” etc. automatically means the student used a Generative AI tool. Rather, compare the writing style with various student writing samples.
  • Further action needed?
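The link spot check referenced above can be partly automated. The sketch below is a minimal example that assumes the submitted links have been collected into a plain text file, one URL per line (the filename and sample size are placeholders); it only confirms that a URL resolves.

```python
# Minimal link spot-check: confirms that randomly sampled URLs resolve.
# Assumes one URL per line in a plain text file; filename is a placeholder.
import random
import requests

def spot_check(path: str, sample_size: int = 5) -> None:
    with open(path) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in random.sample(urls, min(sample_size, len(urls))):
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException as exc:
            status = f"error: {exc}"
        print(url, "->", status)

spot_check("submitted_links.txt")
```

A working link is not the same as a supporting source; as noted above, generated citations can point to real pages that say something else entirely, so flagged or questionable links still need to be opened and read.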

AI Humanizers - What are they?

The increasing use of Generative AI by students, and faculty efforts to counter it, have often been described as an arms race. Among the latest weapons in this race are AI 'humanizer' writing websites.

What are they?
AI humanizer writing websites are tools designed to make AI-generated text sound more natural, human-like, and less detectable as machine-written. They work by taking content created by an AI (like ChatGPT or similar tools) and rewriting or editing it to:

  • Improve tone and flow
  • Add natural language patterns (e.g., contractions, idioms, variability)
  • Avoid common structures or phrasing that AI detectors flag

Some use rule-based methods (applying specific linguistic tweaks), while others use additional AI models trained to mimic human writing styles. These tools are often used to bypass AI detection tools or improve readability.
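As a rough illustration of the rule-based approach, the toy sketch below applies a handful of mechanical substitutions (contractions, informal phrasing) to a piece of text. The substitution lists are invented for demonstration; real humanizer services are far more elaborate and often rely on full rewriting models.

```python
# Toy illustration of a rule-based "humanizer": a few mechanical
# substitutions that nudge text toward informal, human-sounding phrasing.
import re

CONTRACTIONS = {
    r"\bdo not\b": "don't",
    r"\bit is\b": "it's",
    r"\bcannot\b": "can't",
}
STOCK_PHRASES = {
    "In conclusion,": "All told,",
    "Furthermore,": "On top of that,",
    "delve into": "dig into",
}

def naive_humanize(text: str) -> str:
    for pattern, repl in CONTRACTIONS.items():
        text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    for phrase, repl in STOCK_PHRASES.items():
        text = text.replace(phrase, repl)
    return text

print(naive_humanize("In conclusion, we cannot delve into every detail."))
# -> "All told, we can't dig into every detail."
```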

At the time of this writing (Spring 2025), some of the more popular AI humanizer websites are AIHumanizer, WriteHuman, Humanize AI, and AI Undetect, but there are hundreds out there.
 

To address suspected AI humanizer use in student essays:

Detection Strategies

  • Analyze writing patterns: Humanizers may correct grammar but leave an overly uniform tone or lack authentic emotional shifts. Compare current work to past submissions for sudden style changes (a rough side-by-side check is sketched after this list).
  • Require process documentation: Ask for drafts, outlines, and AI prompts used. Verify consistency between stages. 
  • Check contextual depth: Humanized text often remains superficial or misses assignment-specific details (e.g., personal observations, niche citations).
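The side-by-side comparison mentioned above can be made a little more concrete with a few surface-level style measurements. The sketch below assumes you have plain-text copies of an earlier writing sample and the current submission (both filenames are placeholders); these crude features can only prompt a conversation, never prove anything.

```python
# Rough side-by-side style check on two plain-text files.
# Filenames are placeholders; features are surface-level indicators only.
import re
import statistics

def style_features(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": statistics.mean(len(s.split()) for s in sentences),
        "sentence_len_spread": statistics.pstdev(len(s.split()) for s in sentences),
        "vocab_richness": len(set(words)) / len(words),  # type-token ratio
    }

past = style_features(open("past_sample.txt").read())
current = style_features(open("current_submission.txt").read())
for key in past:
    print(f"{key}: past={past[key]:.2f}  current={current[key]:.2f}")
```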

Conversation Approaches

  • Ask open-ended questions: “Walk me through your research process” or “How did you develop this argument?” Inability to discuss specifics may indicate AI use.
  • Focus on learning: Frame violations as growth opportunities. Discuss time management, citation norms, and the value of original thought.

Policy Adjustments

  • Explicitly ban humanizers in syllabi and define consequences.
  • Assign AI-proof tasks: Incorporate real-world observations, class discussions, or reflective elements.

Tools and Workflow

  • Combine AI detectors (e.g., GPTZero) with plagiarism checkers, as humanizers often paraphrase.
  • Use version history tracking in tools like Google Docs to monitor edits.
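For Google Docs specifically, version history is easiest to review in the Docs interface itself (File > Version history). For the technically inclined, the sketch below shows one way to list a document's revision timestamps with the Drive v3 API via google-api-python-client; credential setup is omitted and the file ID is a placeholder, so treat this as a starting point rather than a ready-made tool.

```python
# Sketch: list a Google Doc's revision timestamps via the Drive v3 API.
# Assumes you already have authorized credentials (setup omitted) and
# that file_id is replaced with the document's actual ID.
from googleapiclient.discovery import build

def print_revision_history(creds, file_id: str) -> None:
    service = build("drive", "v3", credentials=creds)
    resp = service.revisions().list(
        fileId=file_id,
        fields="revisions(id, modifiedTime, lastModifyingUser/displayName)",
    ).execute()
    for rev in resp.get("revisions", []):
        who = rev.get("lastModifyingUser", {}).get("displayName", "unknown")
        print(rev["modifiedTime"], who)
```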