The AI Blame Game: How Confirmation Bias Skews Our Judgment of Online Writing
- Dell D.C. Carvalho
- 15 hours ago
- 2 min read
In 2024, a professor at a public university gave students a surprise quiz. The task? Identify which of five short passages were written by AI. Some students circled the blandest one. Others picked the one with the most facts. Few agreed. But here’s the twist: all five were written by the same human professor. He’d simply used five different tones. The exercise wasn’t about catching AI. It was about revealing how people think AI sounds—and how often they're wrong.
This is confirmation bias at work. And it’s shaping how we judge content across the internet.

What Is Confirmation Bias?
Confirmation bias is our tendency to interpret new information in a way that confirms what we already believe. When we expect AI to write a certain way—say, robotic, formulaic, or overly formal—we start “seeing” those traits in any writing that fits the mold.
Writers using clear structure, neutral tone, or even consistent grammar often get accused of being AI. Ironically, human writers who follow good writing practices are sometimes the ones labeled fake. At the same time, sloppy writing is assumed to be “more real,” even when it’s machine-generated.
A 2023 study found that humans misidentified AI writing 38% of the time when no clues were provided, an accuracy of 62%. When readers were primed with bias (e.g., told to “watch out for AI”), accuracy dropped below 50%, worse than a coin flip on a binary human-or-AI call¹.
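To make the arithmetic concrete, here is a minimal sketch in Python. The 38% error rate comes from the study cited above; the exact primed accuracy is an assumed illustrative value, since the article only says it fell below 50%.

```python
# Minimal sketch of the accuracy arithmetic above.
# The 38% error rate is from the cited study; the 48% primed
# accuracy is an assumed value standing in for "below 50%".

unbiased_error = 0.38
unbiased_accuracy = 1 - unbiased_error   # 0.62
chance_accuracy = 0.5                    # coin flip on a binary human/AI call
primed_accuracy = 0.48                   # assumption: "dropped below 50%"

print(f"No priming: {unbiased_accuracy:.0%} correct")
print(f"Primed:     {primed_accuracy:.0%} correct")
print(f"Below chance: {primed_accuracy < chance_accuracy}")  # True
```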
What We Think AI Sounds Like
Certain surface patterns make readers suspicious (the sketch after this list shows how crude the resulting “vibe check” really is):
- Repetitive sentence structure
- Overuse of transitions like “however” or “furthermore”
- Lack of personal anecdotes
- “Too perfect” grammar
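Here is a minimal sketch of that implicit checklist turned into a naive scorer. The features, thresholds, and weights are illustrative assumptions, not a real detector; the point is that polished human prose trips the same wires.

```python
import re
import statistics

# A toy "vibe check" scorer for the cues listed above. Features,
# thresholds, and weights are illustrative assumptions, not a
# real detector.

TRANSITIONS = {"however", "furthermore", "moreover", "additionally"}

def naive_ai_suspicion_score(text: str) -> float:
    """Return a 0..1 'sounds like AI' score from three crude cues."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = text.lower().split()
    # Cue 1: repetitive structure, read as low variance in sentence length.
    lengths = [len(s.split()) for s in sentences]
    uniform = 1.0 if len(lengths) > 1 and statistics.pstdev(lengths) < 3 else 0.0
    # Cue 2: overuse of stock transitions.
    transition_rate = sum(w.strip(".,;") in TRANSITIONS for w in words) / max(len(words), 1)
    # Cue 3: no first-person anecdote (crudely: the word "I" never appears).
    impersonal = 0.0 if "i" in words else 1.0
    return (uniform + min(transition_rate * 20, 1.0) + impersonal) / 3

# A clear, well-structured human sentence pair scores as maximally "AI".
print(naive_ai_suspicion_score(
    "However, the results were clear. Furthermore, the data was complete."
))  # 1.0
```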
These traits are common in AI writing, but they’re also common in resumes, news articles, and academic reports. People often forget that tone and structure are shaped by context, not just authorship. A government report and a creative blog post will sound different—even if written by the same person.
Once readers decide something “sounds like AI,” they start filtering everything else through that lens. That’s how a well-written LinkedIn post by a freelancer gets dismissed as fake, while a ChatGPT-generated rant gets praised for its "realness."
The Real Risk: Distrusting Good Writing
The danger isn’t just mislabeling. It's distrust.
Writers are now being penalized for clarity, polish, or consistency. Job applicants are asked if their cover letters are “authentic.” Students get flagged for using too many facts. Thoughtful blog posts get dismissed because they “don’t feel human.”
As AI gets better, judging authorship by style becomes unreliable. Instead, we need new strategies:
- Check sources, not just vibes
- Ask for transparency in process (e.g., “This draft was AI-assisted”; see the sketch after this list)
- Focus on intent and accuracy, not just tone
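As one hedged illustration of the transparency point, a draft could carry an explicit disclosure alongside its sources. The record shape below is a hypothetical sketch, not an established standard; every field name and value is an assumption.

```python
from dataclasses import dataclass, field

# A hypothetical disclosure record, sketching what "transparency
# in process" could look like. Field names and values are
# assumptions, not an established standard.

@dataclass
class Draft:
    title: str
    body: str
    sources: list[str] = field(default_factory=list)  # check sources, not vibes
    ai_assisted: bool = False                         # disclosed, not guessed at
    disclosure: str = ""

post = Draft(
    title="Quarterly notes",
    body="...",
    sources=["https://example.com/primary-report"],
    ai_assisted=True,
    disclosure="This draft was AI-assisted; facts checked by the author.",
)
print(post.disclosure)
```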
Final Thought
When we rely too much on intuition to sniff out AI, we risk turning confirmation bias into a witch hunt. Not every clear sentence is machine-made. And not every clunky one is a sign of a real person. The next time something “feels AI,” pause and ask: Is it really? Or is that just what I expect?