The Timeline for Realistic 4-D: Devi Parikh from Meta on Research Hurdles for Generative AI in Video and Multimodality | No Priors: Artificial Intelligence | Machine Learning | Technology | Startups


Podcast

Description

Video dominates modern media consumption, but video creation is still expensive and difficult. AI-generated and edited video is a holy grail of democratized creative expression. This week on No Priors, Sarah Guo and Elad Gil sit down with Devi Parikh. She is a Research Director in Generative AI at Meta and an Associate Professor in the School of Interactive Computing at Georgia Tech. Her work focuses on multimodality and AI for images, audio, and video. Recently, she worked on Make-A-Video3D (MAV3D), which creates animations from text prompts. She is also a talented artist working in both AI-generated and analog media. Elad, Sarah, and Devi talk about what’s exciting in computer vision, what’s blocking researchers from fully immersive generative 4D, and AI controllability. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:

Devi Parikh - Google Scholar 

Text-To-4D Dynamic Scene Generation, named MAV3D (Make-A-Video3D)

Full Research Paper

Website with examples of text-to-4D generation

Devi’s Substack

Sign up for new podcasts every week. Email feedback to [email protected]. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @DeviParikh

Show Notes:

(0:00:06) - Democratizing Creative Expression With AI-Generated Video

(0:08:31) - Challenges in Video Generation Research

(0:15:57) - Challenges and Implications of Video Processing

(0:20:43) - Control and Multi-Modal Inputs in Video

(0:25:50) - Audio’s Role in Visual Content

(0:39:00) - Don’t Self-Select & Devi’s Tips for Young Researchers

Transcript