Teaching Kids to Spot AI-Generated Content (A Fun Game)

Last week, my 10-year-old showed me a TikTok video that looked weird. “Mom, is this real?” she asked.

It wasn’t. It was AI-generated — a deepfake of a celebrity saying something they never said. But here’s the thing: she knew to ask.

That’s the skill we’re actually trying to build here. Not paranoia. Not cynicism. Just healthy skepticism and the confidence to question what they see online.

Here’s the game we play at our house to practice “AI detection” — and why it works better than lectures.

Why This Matters (The 30-Second Version)

Your kid will encounter AI-generated content daily. It’s already everywhere:

  • Social media (deepfakes, AI avatars, generated influencers)
  • School (AI-written essays from classmates)
  • News (fake images, manipulated audio)
  • Entertainment (AI music, AI art, synthetic voices)

The goal isn’t to spot every AI-generated thing (good luck with that). The goal is to build critical thinking habits:

  • Does this seem too perfect/weird/convenient?
  • Who benefits if I believe this?
  • Can I verify this somewhere else?

The Game: Real or AI?

We play this once a week, usually at dinner or on car rides. It takes 10-15 minutes. No screens required (though they help).

How It Works

1. I show them 5 images or videos (mix of real and AI-generated)

2. They guess: Real or AI?

3. We discuss why they think what they think

4. I reveal the answer + explain the tells

That’s it. No prize. No pressure. Just practice.
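For families who like to tinker, the four steps above can even be sketched as a tiny quiz script. This is just an illustration of the scoring loop — the items, answers, and function name are all made up for this example.

```python
# Minimal sketch of a "Real or AI?" round as a quiz script.
# Items, answers, and the play_round name are hypothetical examples.

def play_round(items, guesses):
    """Score one round.

    items:   list of (description, answer) pairs, answer is "real" or "ai"
    guesses: the player's answers, in the same order
    Returns the number of correct guesses.
    """
    score = 0
    for (description, answer), guess in zip(items, guesses):
        correct = guess.lower() == answer.lower()
        score += correct
        verdict = "right!" if correct else f"nope, it was {answer}"
        print(f"{description}: you said {guess} -> {verdict}")
    return score

round_items = [
    ("Portrait with melted-looking hair strands", "ai"),
    ("Six-legged dog (two dogs overlapping)", "real"),
]

score = play_round(round_items, ["ai", "ai"])
print(f"Score: {score}/{len(round_items)}")
```

The discussion step — "why do you think that?" — is the part that matters, and no script can do that for you.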

Where I Find Content

AI-Generated:

  • r/ArtificialIntelligence (AI art showcases)
  • Midjourney community feed (midjourney.com/showcase)
  • This Person Does Not Exist (thispersondoesnotexist.com — fake faces)
  • AI voice clones on YouTube (search “AI voice clone examples”)

Real (but weird-looking):

  • r/Pareidolia (real objects that look like faces)
  • Perspective-bending photography (forced perspective shots)
  • Unusual animal photos (blobfish, axolotl, naked mole rat — all real!)

Sample Round (What I Actually Show Them)

Image 1: Portrait of a woman with perfect skin, slightly off symmetry in eyes
→ AI (Midjourney)
Tell: Skin texture too smooth, earrings don’t match, hair strands look “melted”

Image 2: Photo of a dog with six legs
→ Real (perspective trick — two dogs overlapping)
Tell: Shadows match, texture is consistent, found the original source

Image 3: Video of Tom Cruise doing a magic trick
→ AI (deepfake)
Tell: Voice sounds slightly robotic, face doesn’t move naturally when talking

Image 4: Painting of a castle in hyper-realistic detail
→ Real (traditional oil painting by a human artist)
Tell: Brush strokes visible up close, artist signature, dated before AI art tools existed

Image 5: News article screenshot claiming a celebrity died
→ AI (fake news generator)
Tell: No byline, website URL looks off, can’t find the story on reputable news sites

What They’re Learning (Without Realizing It)

1. Visual “Tells” for AI Images

  • Hands: AI struggles with fingers (too many, too few, weird angles)
  • Text: AI-generated images often have gibberish text in backgrounds
  • Symmetry: Too perfect = suspicious (real faces aren’t perfectly symmetrical)
  • Eyes: Uncanny valley stare, inconsistent reflections
  • Backgrounds: Blurry or nonsensical details

2. Audio Red Flags

  • Robotic cadence (too even, no natural pauses)
  • Weird pronunciation of certain words
  • Background noise that cuts off unnaturally

3. Context Clues

  • Who posted it? (anonymous account vs. verified source)
  • Does it cite a source? (and does that source check out?)
  • Is it being shared as “shocking news” with no date/context?

4. Verification Habits

  • Reverse image search (Google Images, TinEye)
  • Check multiple sources before believing news
  • Look for the original source (not just reposts)
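If you want a shortcut for the reverse-image-search habit, a few lines of Python can turn any image URL into search links you open in a browser. The URL patterns below (Google Lens `uploadbyurl` and TinEye's `search?url=`) are what those sites accept at the time of writing — treat them as an assumption, not a stable API.

```python
from urllib.parse import quote

# Build reverse-image-search links for a given image URL.
# URL patterns are assumptions based on current site behavior,
# not a documented, stable API.

def reverse_search_links(image_url):
    encoded = quote(image_url, safe="")  # percent-encode the whole URL
    return {
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "tineye": f"https://tineye.com/search?url={encoded}",
    }

links = reverse_search_links("https://example.com/suspicious.jpg")
for name, url in links.items():
    print(name, url)
```

With older kids, building this together is itself a lesson: the tool is nothing magic, just a question ("where else does this image appear?") asked of a search engine.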

Age-Appropriate Variations

Ages 5-7: “Real or Pretend?”

  • Show them AI-generated animals (flying pigs, rainbow cats)
  • Ask: “Could this be real? Why or why not?”
  • Focus on obvious impossibilities (animals that can’t exist, physics-breaking scenes)

Ages 8-10: “Spot the Weird Thing”

  • Show AI images with subtle errors (extra fingers, wonky text)
  • Turn it into a treasure hunt: “Find 3 things that look wrong”
  • Introduce reverse image search

Ages 11-14: “Fake News Detective”

  • Show AI-generated news articles, deepfake videos
  • Teach fact-checking (Snopes, reverse image search, source verification)
  • Discuss why someone would create fake content (clickbait, misinformation, satire)

What to Say (Scripts That Work)

When they guess right:
“Good eye! What made you suspicious?”

When they guess wrong:
“That’s a tricky one. Here’s what fooled me too at first…”

When they ask why people make fake stuff:
“Lots of reasons. Sometimes for jokes (like The Onion). Sometimes to trick people into clicking. Sometimes because they want you to believe something that isn’t true.”

When they seem anxious:
“The point isn’t to distrust everything. It’s to ask questions when something seems off.”

The Conversation That Happened This Week

My 10-year-old: “So if I can’t tell if something is AI, does that mean I’m bad at this?”

Me: “Nope. It means the AI is getting better. Even adults can’t always tell. That’s why we don’t just trust our eyes — we verify.”

Her: “But how do you verify AI-generated news?”

Me: “Same way you verify any news: Check if other trustworthy sources are reporting it. Look for the original source. If it’s only on sketchy websites, probably fake.”

Her: “What if everyone is sharing it on TikTok?”

Me: “Then everyone is sharing something that might be fake. Popularity doesn’t equal truth.”

Her: “That’s… depressing.”

Me: “Or empowering. You get to be one of the people who doesn’t fall for it.”

When It’s Not a Game Anymore

Sometimes kids encounter AI-generated content that’s genuinely harmful:

  • Deepfake videos of classmates
  • Fake nudes (yes, this is happening in schools)
  • Manipulated images used for bullying

If your kid shows you something like this:

  1. Don’t panic (they’re coming to you because they trust you)
  2. Take a screenshot (evidence if you need to report it)
  3. Report it (to the school, to the platform, to authorities if illegal)
  4. Remind them it’s not real (but acknowledge it feels real and that’s scary)

This isn’t hypothetical. Middle schools are already dealing with AI-generated fake nudes of students. High schools are seeing deepfake videos used in bullying campaigns.

The game we play at dinner isn’t just about media literacy — it’s about building the reflex to question, verify, and come to a trusted adult when something feels wrong.

The Long Game

Here’s what I’m actually teaching my kids:

  • Healthy skepticism (not paranoia)
  • Verification habits (not cynicism)
  • Confidence to question (not distrust everything)

They don’t need to become AI detection experts. They need to become people who ask “wait, is this real?” before hitting share.

And honestly? Adults could use this game too.


Have you talked to your kids about AI-generated content? What approaches have worked for you? Drop me a line at hello@ourkidsandai.com — I’d love to hear what’s working for other families.
