Researchers have developed a new AI model capable of generating highly realistic, coherent video from simple text descriptions. The system, named 'CineGen', uses a novel diffusion architecture that builds scenes frame by frame with consistent characters and environments. Early demonstrations show the model creating short clips of animals in natural settings and of simple human actions. While the videos are currently low-resolution and brief, the technology represents a significant leap forward in generative AI for dynamic visual media. Experts note potential applications in film pre-visualization, gaming, and education, but also highlight serious concerns about deepfakes and misinformation. Citing these ethical considerations, the research team has stated that it is not releasing the full model publicly at this time. For full details and the example videos, read the complete article at https://technologyreview.com/2024/03/15/cinegen-ai-video-generation.