Greetings from Terminal 2 in Nice. Cannes is a wrap and I'm on my way home. It was a fascinating week defined by a sharp divide: some were excited to discuss adapting to a world being reshaped by AI, while others were busy debating whether the shift is real at all. You know which camp I'm in.
In the news: Midjourney has introduced its first video generation model. Called “V1,” it creates 10-second, 24 fps clips from text, image, or mixed prompts. Early tests show support for dynamic motion, basic scene transitions, and a broad range of camera moves, with aspect ratios of 16:9, 1:1, and 9:16. The model was trained on a mix of image and video data.
This is not a photorealistic model. Founder David Holz says the goal is aesthetic control, not realism. Think art direction over live action.
The alpha is private. There’s no timeline for general access or pricing. Holz says they’re prioritizing safety and alignment before scaling.
Midjourney joins OpenAI, Google, and Runway in the text-to-video sprint. Each is approaching the medium with different training data, guardrails, and use cases. So far, only Google's Veo 3 is ready for primetime (assuming you can tell your story by grouping scenes of 8 seconds or less)... but the race has really just begun.
As always, your thoughts and comments are both welcome and encouraged. -s
About Shelly Palmer
Shelly Palmer is the Professor of Advanced Media in Residence at Syracuse University’s S.I. Newhouse School of Public Communications and CEO of The Palmer Group, a consulting practice that helps Fortune 500 companies with technology, media, and marketing. Named LinkedIn’s “Top Voice in Technology,” he covers tech and business for Good Day New York, is a regular commentator on CNN, and writes a popular daily business blog. He's a bestselling author and the creator of the popular free online course, Generative AI for Execs. Follow @shellypalmer or visit shellypalmer.com.