Greetings from Bangkok. Yesterday, I had a Socratic debate with Dr. Ayesha Khanna, Co-Founder and CEO of Addo AI. The audience picked the questions, and we did our best to find a consensus answer. It was great fun! Today, I'm doing the closing keynote. Then: off to see some sights.
In the news: Michael Smith, a musician from North Carolina, was indicted for a fraud scheme that generated more than $10 million in royalties by using AI-generated songs and bots. Between 2017 and 2024, Smith uploaded hundreds of thousands of AI-produced songs to streaming platforms like Spotify, Apple Music, and YouTube Music. He used more than 1,000 bot accounts to inflate stream counts, disguising his activities with VPNs. At the peak of the scheme, Smith’s bots generated more than 4 billion fake streams, collecting millions in fraudulent royalties before his arrest. He's facing three charges: wire fraud, conspiracy to commit wire fraud, and conspiracy to commit money laundering. Each of these charges carries a maximum penalty of 20 years in prison, meaning that if convicted on all counts, he could face up to 60 years behind bars.
Just to clarify: Smith created the songs with AI, and AI-generated works are not protectable by copyright and therefore not eligible for royalty payments. He built a massive listening farm – bots that racked up billions of streams – and collected more than $10 million in illegal royalties.
To tell you the truth, I'm very impressed with the tech stack he built to do this. Here's the thing: he got caught. Viewability problems and ad fraud have been omnipresent for years. There are thousands of bot farms busy visiting websites, watching videos, and cranking up view counts and clicks all over the place. AI tools make building these fraudulent ecosystems much, much easier. Soon, it will be as easy as using ChatGPT. Then what?
As always, your thoughts and comments are both welcome and encouraged. Just reply to this email. -s
P.S. Have you started planning for CES® (Las Vegas, January 7-10, 2025)? Our executive briefings and floor tours are the best way to experience the show. Let us help you get the most out of CES. Learn more.
Today's Most Interesting Stories
Understanding OpenAI's Rumored Humanity-Ending Algorithm
By now, I’m sure you’ve seen dozens of headlines like, “OpenAI may have discovered AI so powerful it could end humanity.” While such headlines are sensational, it's important to approach them with a balanced perspective. Humanity is as safe from AI today as it was yesterday. And considering all of the other threats to human safety, it is highly unlikely that AI will be humanity's undoing any time soon.
US, EU, UK, and others sign legally enforceable AI treaty
New AI model “learns” how to simulate Super Mario Bros. from video footage
Microsoft to detail OneDrive Copilot, mobile app updates, and more during October event
Google expands AI-powered virtual try-on tool to include dresses
OpenAI gives artists access to unreleased tools like Sora for New York gallery exhibit
ABOUT SHELLY PALMER
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.