Yesterday, I was one of the 183 million people who watched AI-generated rabbits bouncing on a backyard trampoline on TikTok. The video, presented as Ring camera footage, was adorable. It was also fake. It took me a few seconds to catch the glitch (one bunny disappears mid-hop), but I wasn’t mad. It was entertaining. Frankly, whether trampoline bunnies are real or fake doesn’t impact how I spend my day.
However, according to a couple of articles that made the rounds yesterday, the majority of viewers thought the video was real. That’s the part worth examining. I’m not linking to any of the articles making that claim because I can’t tell whether the sources are credible, whether the stats are real, or whether they were hallucinated by the same generative AI platform that made the bunnies.
Fifteen years ago, in a four-part series I wrote with my son Jared called Truthiness in a Connected World, we tried to articulate how belief would start to overtake fact in digital environments. We explored the difference between Truth (objective reality), truth (socially accepted narratives), and truthiness (what feels true, regardless of evidence). Part 1 set the stage. Part 2 examined how technology alters our perception of reality. Part 3 explained how people form truth clusters: groups that share beliefs and reinforce them, independent of whether those beliefs are fact-based.
Those essays weren’t predictions; they were frameworks, and they’ve never been more relevant. What Stephen Colbert brilliantly labeled “truthiness” and “wikiality” has matured into AI slop: a flood of synthetic media, plausible nonsense, and half-true headlines that look and feel real but often aren’t.
This follows directly from Monday’s post: Is ChatGPT Agent Really Fooling CAPTCHAs? Both Answers Are Terrifying. The pattern is clear: AI doesn’t need to be right. It just needs to be fast, cheap, and generally believable.
Now that anyone can generate content that looks like reality, the burden of truth has shifted. You need to verify everything… unless you don’t, because, like the bunny video, most content you encounter won’t merit investigation. You’ll just watch, smile, and scroll.
That’s the real concern. As we wrote back in 2010:
“The general public doesn’t differentiate between objective truth, agreed-upon truth, and truthiness. Online, all three look exactly the same.” - Truthiness in a Connected World, Part 3
That’s where we live now. Some future states might include paywalls for verified facts or watermarked media with cryptographic provenance. The most likely scenario, though, is that people will continue scrolling and believing what they want to believe. The brand makes the promise. Trust follows the logo, not the facts.
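For readers curious what “cryptographic provenance” might actually look like, here’s a minimal sketch in Python. It’s a toy illustration under assumptions, not any shipping standard: real efforts such as the C2PA content credentials spec embed signed manifests inside the media file itself. The function names and the video bytes below are hypothetical, and the sketch relies on the third-party cryptography package.

```python
# Toy sketch of cryptographic provenance (hypothetical names throughout):
# a publisher signs the SHA-256 digest of a media file; anyone holding the
# publisher's public key can confirm the bytes haven't been altered.
# Requires the third-party `cryptography` package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_media(media_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the digest of the media bytes."""
    return private_key.sign(hashlib.sha256(media_bytes).digest())


def verify_media(media_bytes: bytes, signature: bytes, public_key) -> bool:
    """Viewer side: True only if the bytes match what the publisher signed."""
    try:
        public_key.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False


key = Ed25519PrivateKey.generate()           # publisher's signing key
video = b"...original bunny video bytes..."  # stand-in for a real file
sig = sign_media(video, key)

print(verify_media(video, sig, key.public_key()))            # True
print(verify_media(video + b"edit", sig, key.public_key()))  # False: tampered
```

The hard part, of course, isn’t the math. It’s getting platforms to check signatures and viewers to care, which brings us right back to the scrolling problem.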
The question isn’t “Is this real?” It’s “Does it matter if it’s not?” If the answer is yes, we're going to need much better filters and a healthier dose of skepticism.
As always, your thoughts and comments are both welcome and encouraged. -s
P.S. If you search now, you'll find hundreds of versions of the bunnies-on-a-backyard-trampoline video on TikTok, FB, Insta, X, etc., as people try to jump (pardon the pun) on the "bunnies on trampolines" bandwagon. People will do anything for a click!
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.