
Grok 3: The case for an unfiltered AI model

Think about this: Do unfiltered AI models have a valid place in the AI landscape?

The world isn’t “safe for work,” but most foundational models are. OpenAI, Anthropic, Google and other popular model builders aggressively filter training data to exclude harmful content — adult entertainment, hate speech, extremism and even controversial political perspectives. The result? Polished, sanitized models that align with corporate and legal safety standards.

That’s great — or is it? The real world is messy, complicated, and filled with morally grey areas, which raises the question: Do unfiltered AI models have a valid place in the AI landscape? Read more. -s

P.S. I'm proud to be partnering with the MMA to host and facilitate the CMO AI Transformation Summit (March 18, 2025 | NYC). This half-day, invitation-only event is limited to select CMOs and will provide insights into the strategies, technologies, and leadership practices of peer CMOs who are driving successful AI transformations across the world's best marketing organizations. Request your invitation.

Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN and writes a popular daily business blog. He's a bestselling author, and the creator of the popular, free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com
