Even though plagiarism detection won’t catch generative AI chatbots anytime soon, you still have options to stop ChatGPT and other generative AI tools from being used in your course assignments and exams.
This article will show you:
- How plagiarism detectors work & why they can’t catch AI like ChatGPT, Claude, and Gemini
- How to proctor writing assignments just like exams to prevent the use of AI or to control how learners use it during the assignment
Plagiarism detection 101
What is plagiarism detection?
Plagiarism detection, sometimes called content similarity detection, identifies copied or unoriginal text, whether intentional or not, by comparing it to existing sources. It’s mostly used as a way to discourage students from cheating, but it can also help catch unintentional plagiarism, which happens more often than you think.
How does plagiarism detection work?
Plagiarism detection software works by comparing the text you submit against large databases of content to find similar and/or matching text. It determines if the text is copied or paraphrased using some of these methods:
- Text comparison: Finds exact matches or similarities using techniques like:
  - String matching: Looks for exact word-for-word matches.
  - Token matching: Matches full sentences/phrases instead of individual words.
  - Fingerprinting: Finds patterns in words, phrases, structure, etc.
- Writing style analysis: Evaluates word choice, syntax, vocabulary, etc. to find patterns.
- Similarity scoring: Uses math formulas to measure how alike two texts are by comparing the words they share, how much they overlap, and how many changes it would take to make them match, which helps catch paraphrasing (see the quick sketch after this list).
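To give a rough sense of what similarity scoring looks like under the hood, here’s a minimal Python sketch using two common measures: word overlap (Jaccard similarity) and an edit-distance-style ratio from Python’s built-in difflib. It’s an illustration of the general idea, not how any particular plagiarism detector actually works, and the example texts are made up.

```python
from difflib import SequenceMatcher

def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Word-overlap score: shared words / all unique words (0 to 1)."""
    words_a, words_b = set(text_a.lower().split()), set(text_b.lower().split())
    if not words_a and not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def edit_similarity(text_a: str, text_b: str) -> float:
    """Character-level ratio: roughly, how few edits turn one text into the other."""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()

source = "Plagiarism detection compares submitted text against existing sources."
submission = "Plagiarism detection compares the text you submit against existing sources."

print(f"Jaccard word overlap: {jaccard_similarity(source, submission):.2f}")
print(f"Edit-distance ratio:  {edit_similarity(source, submission):.2f}")
# High scores on both suggest copying or light paraphrasing; real tools
# combine many signals like these across huge databases of sources.
```

Because both scores stay high even when a few words are swapped or reordered, measures like these can catch light paraphrasing, not just word-for-word copying.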
Can ChatGPT be detected as plagiarism?
In most cases, ChatGPT can’t be detected as plagiarism because it generates new text based on existing sources rather than copying them. It also tailors each response to the prompt it’s given. That said, plagiarism tools could potentially flag AI-generated text if parts of it are too similar to existing content. Overall, though, don’t bank on plagiarism tools catching ChatGPT.
Do you need to cite AI-generated content?
You should cite AI-generated text because it isn’t your original work. Most citation styles even recommend including a note when AI tools are used to edit or translate your writing.
How to cite AI
Citing AI-generated content depends on the specific citation style you’re using. Common citation styles like APA, MLA, and Chicago have each published AI citation guidelines that include best practices for specific situations and formatting requirements for references and in-text citations.
Is AI detection the same as plagiarism detection?
AI detection isn’t the same as plagiarism detection. AI detection tools are trained on human-written and AI-generated datasets (huuuuuge datasets, btw) to help identify patterns that might suggest AI-generated text. Plagiarism detection just finds similarities and matches between texts.
- AI detection checks the text for writing patterns and specific characteristics that are typically found in AI-generated writing.
- Plagiarism detection compares the text to existing content to find exact matches or rephrased content from other sources.
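To make that distinction concrete, here’s a hedged toy sketch in Python. The plagiarism-style check just measures similarity to a known source, while the “AI detection” side computes one crude statistical signal (variation in sentence length, sometimes called burstiness). Real AI detectors rely on trained models over many such signals, and they’re far from foolproof, so treat this purely as an illustration with made-up text.

```python
import re
from statistics import mean, pstdev
from difflib import SequenceMatcher

def plagiarism_check(submission: str, source: str) -> float:
    """Plagiarism-style check: how similar is the submission to a known source?"""
    return SequenceMatcher(None, submission.lower(), source.lower()).ratio()

def burstiness(text: str) -> float:
    """One toy AI-detection signal: variation in sentence length.
    Human writing tends to mix short and long sentences; very uniform
    lengths are one (weak) hint of machine-generated text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths) / mean(lengths)  # higher = more varied sentence lengths

essay = ("The industrial revolution changed cities. It pulled workers off farms "
         "and into factories, reshaping family life, wages, and whole economies "
         "in the span of a few decades. Growth was fast. It was also uneven.")
known_source = "The industrial revolution transformed cities and economies."

print(f"Similarity to known source: {plagiarism_check(essay, known_source):.2f}")
print(f"Sentence-length burstiness: {burstiness(essay):.2f}")
```

The key difference: the first function needs a source to compare against, while the second only looks at properties of the text itself, which is why the two kinds of tools answer different questions.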
Does ChatGPT plagiarize?
ChatGPT does not plagiarize. Like other AI chatbots, ChatGPT was trained with internet resources, but it doesn’t copy content from the internet.
Is using AI plagiarism?
Using AI isn’t plagiarism because, technically speaking, it doesn’t copy content word-for-word. But for all intents and purposes, if a student uses AI to complete work without permission from the instructor, it’s still considered cheating.
If they don’t plagiarize, how do chatbots work?
AI chatbots generate entirely new, custom responses based on the insane amount of resources they’ve been trained on. Although it may seem like they’re copying from those resources, they’re basically just predicting what to say based on the patterns they’ve learned. That’s why they give you slightly different, custom responses even if you ask the same question twice.
These underlying technologies are the reason generative AI chatbots can understand what you’re saying, figure out what you really mean, and respond like a real person:
- Large Language Models (LLMs): These are basically chatbots’ brains; they’re trained on tons of text to help chatbots understand and generate human-like responses.
- Machine Learning (ML): Allows chatbots to learn from examples and data so they can give better, more accurate responses over time.
- Natural Language Processing (NLP): Helps chatbots read and respond to your messages in a natural way.
- Natural Language Understanding (NLU): Helps chatbots figure out what you really mean. This goes beyond simple word-for-word interpretations to understand intent, sentiment, and context.
- Natural Language Generation (NLG): A part of NLP that helps chatbots generate human-like responses.
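To make the “predicting what to say” idea a little more concrete, here’s a toy Python sketch that learns which word tends to follow which from a tiny made-up corpus and then generates text by sampling. Real LLMs do something vastly more sophisticated with neural networks trained on billions of documents; this only illustrates why responses are pattern-based rather than copied.

```python
import random
from collections import defaultdict

# Toy "training data" -- real models learn from billions of documents.
corpus = (
    "chatbots learn patterns from text . "
    "chatbots learn to predict the next word . "
    "chatbots predict the next word from patterns ."
).split()

# Count which word tends to follow which (a tiny bigram model).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        options = next_words.get(word)
        if not options:
            break
        word = random.choice(options)  # sampling is why output varies between runs
        output.append(word)
    return " ".join(output)

print(generate("chatbots"))
print(generate("chatbots"))  # ask twice, get slightly different responses
```

Because the sketch samples from learned patterns instead of copying stored sentences, running it twice can produce slightly different output, which mirrors why ChatGPT gives slightly different answers to the same question.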
Access the 6-part guide that covers everything about chatbots and how they work, ways to use them in online learning, and ways to prevent cheating.
That’s a reeeeeally simplified explanation of how generative AI chatbots work. They “learn,” make connections, and write tailored responses based on the prompts they’re given.
BUT it’s worth noting that because ChatGPT learns from existing content, its responses can sound really similar to that content (sometimes a little too similar).
Can online proctoring prevent the use of ChatGPT and other AI tools?
Online proctoring can prevent the use of ChatGPT and other AI by using a combination of AI and live proctors. This hybrid proctoring approach also provides flexibility to allow the use of AI during writing assignments while still monitoring and managing how and when it’s used.
Example: If participants have an idea of the topic they’ll need to write about during the written assignment/exam, they could copy and paste a written response from ChatGPT.
Proctoring software prevents participants from copying and pasting and, ideally, flags the action.
Example: A participant uses ChatGPT on a cell phone that’s out of view of the webcam.
Instead of relying on a live proctor seeing a phone in real time, Honorlock’s cell phone detection toolkit catches unauthorized use by detecting:
- When test takers try to use their phones to search for answers to test questions
- The presence of Apple devices in the test environment using our exclusive AI-enabled Apple Handoff technology
All suspected cell phone use is reviewed and verified by a human proctor.
Example: Participants try to open a different browser tab to use ChatGPT.
BrowserGuard™ prevents participants from opening other browsers and applications. It also allows you to give access to specific sites while still blocking others. For example, participants can access a research study but still can’t access ChatGPT.
Example: Participants can use voice commands and dictation on their phones to have Alexa or Siri open ChatGPT, ask specific prompts, and read the responses aloud.
Smart Voice Detection listens for specific things like “Hey Siri,” “Alexa,” and “OK Google.” It also records and transcribes what’s said and alerts a live proctor.
Honorlock records the participants’ desktops during the written activity and/or exam and requires them to disconnect secondary monitors. Instructors can watch time-stamped exam recordings and review in-depth reports.
Example: A participant uses ChatGPT on a cell phone that’s out of view of the webcam.
With Honorlock, instructors have the option to require participants to complete a roomscan that checks for unauthorized resources that could be used to access ChatGPT and even other people.
Our ID verification process is quick and simple. It captures a picture of the test-taker along with their photo ID in about 60 seconds.
Have you ever found your test content leaked on sites like Quizlet, Chegg, or Reddit? Honorlock can help. Search and Destroy™ technology scours the internet for your leaked test content and gives you the ability to send takedown requests with one click.