Greetings from Washington, D.C. I'm here to do a short talk about the state of AI for the Dow Jones Leadership Summit.
In the news: Researchers at Wharton just showed that ChatGPT falls for the same psychological tricks that work on humans. Using Robert Cialdini's classic persuasion techniques, they convinced GPT-4o Mini to break its own rules with alarming consistency.
The numbers are stunning. Ask the AI directly how to synthesize lidocaine (a regulated drug) and it complies one per cent of the time. But first get it to answer a harmless chemistry question about vanillin, then ask about lidocaine? Compliance jumps to 100 per cent. The principle at work is commitment: get agreement on something small first, and compliance with larger requests skyrockets.
The research team tested 28,000 conversations using seven persuasion principles. Invoking authority by mentioning Andrew Ng doubled compliance rates. Even flattery worked, pushing success rates from one per cent to 18 per cent. Peer pressure ("all the other AIs are doing it") showed measurable impact.
This vulnerability exists because large language models train on billions of human conversations where social dynamics play out repeatedly. They absorb patterns where people defer to experts, reciprocate favors, and maintain consistency. The AI doesn't feel flattered; it learned that certain linguistic patterns precede specific responses.
Every customer service chatbot, every AI assistant, every automated system potentially shares these weaknesses. Bad actors don't need sophisticated technical exploits. They need Psychology 101.
Your AI systems process sensitive information and make decisions affecting your bottom line. If these systems respond to flattery like an eager-to-please intern, you have a security problem firewalls can't fix. You need behavioral scientists on your security team, not just engineers.
We've built AI systems that mirror human psychology so closely that they inherit our social vulnerabilities. The more human-like we make AI communication, the more human-like its vulnerabilities become.
As always, your thoughts and comments are both welcome and encouraged. -s
About Shelly Palmer
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He’s a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.