You ran your citations through ChatGPT, Gemini, or Claude, and now something feels off. Maybe a DOI link isn't working. Maybe the journal name looks familiar, but you still can't find the article. Or maybe you've heard enough horror stories that you've started double-checking everything.
Here’s what catches a lot of students off guard: AI tools generate citations that look completely real. The journal is real. The author may even be real. But the article itself doesn’t actually exist. The good news is that fake citations are easy to spot when you know what to look for. Each of these seven checks takes about 30 seconds.
Fake citations get flagged by professors, raise academic integrity concerns, create stress you really don’t need in the middle of a paper, and can seriously impact your final grade. Most students don’t realize a citation is fabricated until a faculty member tries to find the source and can’t.
Run the seven checks below before you submit and you can catch the issue while there’s still time to fix it.
What it is: Most legitimate journal articles published in the last twenty years have a DOI (Digital Object Identifier) – a permanent link that takes you straight to the publisher's page for that article. A real DOI should connect you to a real page.
Why AI fakes it: Language models are trained to produce plausible-looking text. They generate DOI strings that look right – correct publisher prefix, correct-shaped article identifier – but the specific combination was never registered with anyone. The model isn’t checking a live citation database. It’s generating text that statistically looks correct.
The 30-second check: Copy the DOI, paste it into your address bar after https://doi.org/, and hit enter. If it lands on a publisher page for the cited article, you're good. If you get a 404, a "DOI not found" page, or a totally different article, the citation is fabricated.
Example: An AI-generated paper cites Hartman, J. L., & Chen, R. K. (2021). Evidence-based interventions for reducing medication errors in acute care nursing: A systematic review. Journal of Advanced Nursing, 77(8), 3421–3438. https://doi.org/10.1111/jan.14892. The DOI format is correct for that journal. Paste it into doi.org, though, and it doesn't actually open the page. That’s usually the first sign something’s wrong.
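If you have a long reference list, this check is easy to script. The sketch below (Python, standard library only) asks the Crossref API whether a DOI is registered. Crossref covers most journal DOIs but not every registrar, so treat a False as "go check doi.org by hand", not as proof the citation is fake.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def crossref_url(doi: str) -> str:
    """Build the Crossref lookup URL for a DOI string."""
    return "https://api.crossref.org/works/" + urllib.parse.quote(doi.strip(), safe="")

def doi_registered(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref knows this DOI, False on a 404.

    Crossref indexes most (not all) journal DOIs, so False means
    "verify by hand at doi.org", not "definitely fabricated".
    """
    try:
        with urllib.request.urlopen(crossref_url(doi), timeout=timeout) as resp:
            return json.load(resp).get("status") == "ok"
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

# The DOI from the example citation above:
# doi_registered("10.1111/jan.14892")
```

Looping this over every DOI in a reference list takes seconds and catches the most common fabrication outright.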
What it is: AI models know thousands of real journal names. What they often don't have is reliable access to every article each journal has actually published. So they pair a real journal with an invented article title that sounds like it fits your topic.
Why AI fakes it: Pairing a fake article with a real-sounding journal makes the citation feel credible at a glance. It's also harder to catch than a fully invented source, because the journal itself checks out.
The 30-second check: Go to the journal's website and search the archive for the exact article title. If nothing comes up, the article probably wasn't published there. Cross-check by searching Google Scholar with the article title in quotes.
Example: Using the same Hartman & Chen citation, search the Journal of Advanced Nursing archive for "Evidence-based interventions for reducing medication errors in acute care nursing." Nothing comes up. The journal is real. The article isn't.
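The archive search can also be done programmatically. A sketch, using Crossref's `query.bibliographic` search parameter: look the title up and see whether any returned record matches it exactly, after normalizing case and whitespace.

```python
import json
import urllib.parse
import urllib.request

def normalize(title: str) -> str:
    """Lowercase and collapse whitespace so titles compare cleanly."""
    return " ".join(title.lower().split())

def title_found(title: str, rows: int = 5, timeout: float = 10.0) -> bool:
    """Return True if Crossref has an article whose title matches exactly."""
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    url = "https://api.crossref.org/works?" + query
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        items = json.load(resp)["message"]["items"]
    wanted = normalize(title)
    return any(normalize(t) == wanted
               for item in items
               for t in item.get("title", []))
```

A fuzzy near-match in the results is worth inspecting too: AI sometimes garbles a real title slightly rather than inventing one from scratch.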
What it is: AI models pull names of real researchers from training data and assign them to fabricated papers. Sometimes the author is a known expert in a totally unrelated field. Sometimes they're a real researcher in your field who never wrote the paper being cited.
Why AI fakes it: Real author names lend credibility. The tool isn't trying to deceive you – it's pattern-matching common author names with plausible topics. The result often looks completely legitimate unless you stop and verify it.
The 30-second check: Look up the author on Google Scholar or their institutional faculty page and see what they actually publish on. If a nursing paper is attributed to someone whose work has only ever been about environmental engineering, the citation is fake. Even if the field is right, an article missing from their actual publication list is still a serious flag.
Example: If a "J. L. Hartman" exists in nursing research but their publication list doesn't include the medication-errors article, that's the flag. Right name and right field isn't the same as right citation.
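The author check can be scripted the same way. A sketch using Crossref's `query.author` parameter to pull the titles actually attributed to a name, so you can scan for the cited article:

```python
import json
import urllib.parse
import urllib.request

def author_search_url(author: str, rows: int = 10) -> str:
    """Build a Crossref works query filtered by author name."""
    query = urllib.parse.urlencode({
        "query.author": author,
        "rows": rows,
        "select": "title,author",
    })
    return "https://api.crossref.org/works?" + query

def author_titles(author: str, timeout: float = 10.0) -> list[str]:
    """Return article titles Crossref attributes to this author name."""
    with urllib.request.urlopen(author_search_url(author), timeout=timeout) as resp:
        items = json.load(resp)["message"]["items"]
    return [t for item in items for t in item.get("title", [])]
```

Name matching is fuzzy, so a common surname will return unrelated researchers; the useful signal is whether the specific cited title appears anywhere in the results.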
What it is: AI generates page ranges that look right but often don't reflect how the journal actually paginates. A range like 3421–3438 implies a journal with continuous pagination across a volume and an article that runs about 18 pages.
Why AI fakes it: The model knows what academic citations are supposed to look like. It knows page ranges go there. It doesn't know what the journal's pagination actually looks like in that volume.
The 30-second check: If you can find the article PDF, count the pages. If you can't find it, look at other articles from the same volume and issue. If the journal typically runs 12-page articles and this citation claims a 50-page article in the same issue, it deserves a closer look. Continuous pagination versus issue-based pagination is another tell – some journals restart at page 1 each issue, others don't.
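The arithmetic behind this check is simple enough to script. A minimal sketch: parse the page range and flag anything far outside the journal's norm (the 12-page "typical" value below is an illustrative assumption, not a rule).

```python
import re

def page_count(page_range: str) -> int:
    """Pages implied by a range like '3421-3438' (hyphen or en dash)."""
    start, end = re.split(r"[-\u2013]", page_range.strip(), maxsplit=1)
    return int(end) - int(start) + 1

def looks_oversized(page_range: str, typical: int = 12) -> bool:
    """Flag ranges far longer than the journal's typical article length."""
    return page_count(page_range) > 2 * typical

# page_count("3421-3438") -> 18 pages, plausible for a systematic review
```

A plausible page count doesn't clear a citation, of course; it just means this particular tell is absent.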
What it is: AI sometimes cites work from years before the topic, technology, or framework being discussed actually existed.
Why AI fakes it: The model isn't checking the timeline. It generates a plausible-sounding year, and "plausible" often just means "recent enough to feel current."
The 30-second check: Sanity-check the year against the topic. A 2008 paper about a 2019 medication is impossible. A 1995 paper analyzing TikTok's role in nursing education is impossible. The less obvious version: a citation from before a key term or framework was even coined.
What it is: AI-generated reference lists tend to cluster around the same publication years, usually the last three to five.
Why AI fakes it: When you ask for sources, the model pulls from the period most heavily represented in its training data, and from years that feel "current" to academic readers. The result is unnatural clustering.
The 30-second check: Look at your reference list as a whole. If five of six sources are from the last two years and they all feel like they were written specifically to support your argument, slow down and check them. Real research draws on a range of years, including older foundational work; when every source lines up a little too perfectly, that's a flag.
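If you want to eyeball the spread quickly, this check can be scripted too. A minimal sketch that pulls four-digit years out of a pasted reference list and counts them (the sample list is hypothetical):

```python
import re
from collections import Counter

def year_spread(references: str) -> Counter:
    """Count plausible publication years (1900-2029) in a reference list."""
    years = re.findall(r"\b(19\d{2}|20[0-2]\d)\b", references)
    return Counter(int(y) for y in years)

sample = """
Hartman, J. L., & Chen, R. K. (2021). A systematic review. 77(8), 3421-3438.
Lopez, M. (2022). Another recent source.
Nguyen, T. (2023). Yet another recent source.
"""
# year_spread(sample) shows everything clustered in 2021-2023, with no
# older foundational work; that clustering is the pattern to watch for.
```

The word-boundary regex deliberately skips numbers embedded in page ranges like 3421-3438, so only standalone years are counted.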
What it is: Real academic reference lists have variation. Some articles have DOIs, some don't (especially older ones). Some have full page ranges, some are online-only with article numbers. Some authors include middle initials, others don't. Real reference lists are slightly messy.
Why AI fakes it: Models are trained on style guides, so they tend to produce formatting that's too consistent. Every citation has a DOI. Every page range is present. Every author has matching initials. That kind of perfect consistency is often the clue.
The 30-second check: Scan your reference list as a whole. If every entry has a perfectly formatted DOI, a tidy page range, and identical author style, ask yourself whether that matches the variation you usually see in real academic sources. Then ask yourself how many of these sources you've actually opened. Sometimes the overall pattern is what gives the problem away. Tools with a rule-based reference engine, like PERRLA, produce more real-world variation because the references come from real sources, not generated text.
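That "too perfect" pattern is measurable. A minimal sketch, assuming one reference per line: compute what fraction of entries carry a DOI link, and be suspicious when a long list scores exactly 1.0.

```python
def doi_fraction(references: str) -> float:
    """Fraction of non-empty reference lines that include a DOI link."""
    lines = [ln for ln in references.splitlines() if ln.strip()]
    if not lines:
        return 0.0
    return sum("doi.org" in ln.lower() for ln in lines) / len(lines)

# Exactly 1.0 across a long list is the suspicious uniformity described
# above; real reference lists usually fall somewhere below it.
```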
If a citation fails any of the checks above, the move is the same: remove it and replace it with a source you can actually open and verify.
Yes, often enough that it's worth checking. ChatGPT and other large language models routinely generate citations that look real but reference articles that were never published. It's one of the most well-documented failure modes of generative AI in academic work.
An AI hallucination is when a language model produces content that sounds confident and plausible but is factually wrong or completely made up. With citations, that usually means real-sounding author names, real journal titles, and plausible DOIs combined into a source that does not exist.
Often, yes. Faculty members increasingly check citations when something feels off, and a fake citation is easy to confirm in under a minute using the checks above. AI-detection tools also flag patterns common in machine-generated text. Submitting fabricated citations is a real risk, especially as more faculty start checking sources directly.
Not reliably. Asking a language model to verify its own output tends to generate more hallucinated content. The model may confirm fake citations are real, invent extra "verification" details, and even fabricate quotes from the fake source to back itself up. You still need to verify citations using outside sources.
Tools with rule-based reference engines avoid the hallucination problem because they're not generating citations from a language model in the first place. They pull from verified source data instead of generating citation details from scratch. PERRLA is one example, built specifically for academic writing in Word, Google Docs, and in your browser.
If you'd rather not check every citation by hand, PERRLA's reference engine is rule-based, not AI-generated. Citations come from a verified source database, so they're accurate, consistent, and never hallucinated. Perfect APA formatting, without the hallucinations or the doubt.
Start a free 7-day trial and see how PERRLA handles the citations on your next paper.