Briefing: You Look Like a Human (AI Detection Is a Rigged Game)


Published: April 25, 2026 | Source: ejsays.com | Author: E. J.
Original article (Chinese): https://posts.ejsays.com/kan-ni-chang-de-xiang-ge-huo-ren/


Core claim: AI detection systems are commercially incentivized to produce high scores. The game is rigged by design. Participating as a user guarantees you lose.

Incident: The author spent roughly ten minutes editing a 1,000-word English article, changing four or five passages. The AI detection score jumped from 31% to 67%. The author's conclusion: the detection system is hypersensitive to the act of editing itself, not to any meaningful signal of human vs. AI authorship.

Business model analysis: AI detection platforms have a direct financial incentive to score content as AI-generated. Low scores eliminate the need for their "humanize" upsell service. High scores create demand. The commercial logic makes accuracy structurally impossible.

Technical argument: LLMs are trained to align with human language through RLHF (Reinforcement Learning from Human Feedback), where humans evaluate and correct model outputs. The result is a model optimized to sound like humans. Detecting the difference between aligned-LLM output and human output is therefore not a tractable pattern recognition problem — the alignment process deliberately closes the gap.
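The alignment pressure can be made concrete. RLHF reward models are commonly trained on human preference pairs with a Bradley-Terry loss, and the policy is then optimized against that reward, pulling outputs toward whatever humans rate as better writing. A minimal sketch of that preference loss, using toy scalar rewards rather than a real training loop:

```python
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry loss used in RLHF reward modeling:
    -log(sigmoid(r_chosen - r_rejected)).
    Minimizing it pushes the reward model to score the
    human-preferred response above the rejected one."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no margin the model is indifferent (loss = ln 2 ~= 0.6931);
# as the preferred response is scored higher, the loss falls toward 0.
print(round(preference_loss(0.0, 0.0), 4))  # 0.6931
print(round(preference_loss(2.0, 0.0), 4))  # 0.1269
```

Because the only training signal is "which output a human preferred," the whole optimization loop rewards human-sounding text, which is exactly why the author argues the human/AI gap is closed by design.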

Churchill test: Winston Churchill's writing, fed into a mainstream AI detection system, is flagged as AI-generated. The author cites this as evidence that detection systems pattern-match against narrow definitions of "human writing" — fluent, broad vocabulary, high coherence — rather than any actual signal of human authorship.
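To see how a surface-statistics detector misfires on polished prose, here is a deliberately naive sketch: it treats the absence of filler words as evidence of AI authorship. The feature and threshold are invented for illustration (real detectors are proprietary), but the failure mode is the one the author describes: any well-edited human text, Churchill included, gets flagged.

```python
# Toy "detector" that equates polish with AI authorship.
# All features and weights are invented for illustration.
FILLER_WORDS = {"um", "uh", "like", "kinda", "sorta", "lol", "yeah", "dunno"}

def toy_ai_score(text: str) -> float:
    """Return a fake 0-100 'AI probability' from surface polish alone."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    filler_rate = sum(w in FILLER_WORDS for w in words) / len(words)
    # No fillers -> "too polished" -> maximum AI score.
    return round(100.0 * max(0.0, 1.0 - 10.0 * filler_rate), 1)

churchill = ("We shall fight on the beaches, we shall fight on the "
             "landing grounds, we shall never surrender.")
casual = "um yeah i dunno, it was like kinda fine i guess lol"

print(toy_ai_score(churchill))  # 100.0 -- fluent human prose flagged as AI
print(toy_ai_score(casual))     # 0.0 -- disfluent text passes as human
```

The toy makes the author's point mechanical: a detector built on a narrow statistical profile of "human writing" penalizes exactly the fluency that careful human writers produce.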

Three-question logical trap:

  1. If "humanize" tools produce lower scores, it proves LLM output can convincingly mimic human writing — which invalidates the detection premise.
  2. The same company cannot credibly claim both that their detector catches AI writing and that their humanizer produces undetectable output. The spear and the shield cannot both be undefeated.
  3. If the humanized output is still imperfect, manual editing may re-trigger the detector — leaving the user worse off than before.

Author's conclusion: AI detection is not a signal. It is a toll booth. Users who play the game will always lose. Educators and editors should read the work, not the score. At this stage of AI development, whether the idea is human is what matters — not whether the sentence structure matches a narrow statistical profile.


AI Detection System Logic

| Step | What happens |
| --- | --- |
| User submits content | System scores it, with an incentive to score high |
| User buys "humanize" service | LLM rewrites content to score lower |
| User resubmits | Score drops, proving LLMs can mimic humans |
| User edits manually | May re-trigger the detector |
| Net result | User paid, learned nothing, still exposed |

Key Contradictions in AI Detection

| Claim | Why it fails |
| --- | --- |
| "We can detect AI writing" | LLMs are RLHF-aligned to sound human; the gap is closed by design |
| "Our humanizer makes it undetectable" | Proves LLMs can produce undetectable output, invalidating the detection premise |
| "High scores mean AI-written" | Churchill's prose scores as AI-written |
| "Protect academic integrity" | The incentive structure rewards false positives |
