Text-to-video model

A text-to-video model is a machine learning model that takes a natural language description as input and produces a video relevant to the input text. Recent advancements in generating high-quality, text-conditioned videos have largely been driven by the development of video diffusion models.
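
As a rough illustration of how such models operate, the PyTorch sketch below denoises a random video tensor step by step, conditioned on a text embedding. The tiny denoiser, the simplified update rule, and all tensor sizes are assumptions chosen for illustration, not any published model's design.

```python
# Minimal sketch of the video-diffusion idea: generate a video by iteratively
# denoising a random noise tensor, conditioned on a text embedding.
# The denoiser below is a hypothetical stand-in, not a real architecture.
import torch
import torch.nn as nn

class TinyVideoDenoiser(nn.Module):
    """Toy denoiser over a video tensor of shape (batch, channels, frames, H, W)."""
    def __init__(self, channels=3, text_dim=64):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.text_proj = nn.Linear(text_dim, channels)

    def forward(self, x, text_emb):
        # Inject the text condition as a per-channel bias before convolving.
        bias = self.text_proj(text_emb)[:, :, None, None, None]
        return self.conv(x + bias)

@torch.no_grad()
def sample(denoiser, text_emb, steps=50, frames=16, size=32):
    # Start from pure Gaussian noise and repeatedly remove predicted noise.
    x = torch.randn(1, 3, frames, size, size)
    for _ in range(steps):
        predicted_noise = denoiser(x, text_emb)
        x = x - predicted_noise / steps  # simplified update; real samplers differ
    return x  # a real pipeline would map this back to pixel values

denoiser = TinyVideoDenoiser()
video = sample(denoiser, torch.randn(1, 64))
print(video.shape)  # torch.Size([1, 3, 16, 32, 32])
```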

Models
Several models exist, including open-source ones. CogVideo is an early text-to-video model with 9.4 billion parameters; a demo version with its code is available on GitHub. Meta Platforms has a partial text-to-video model called "Make-A-Video". Google Brain has released a research paper introducing Imagen Video, a text-to-video model built on a 3D U-Net, a block of which is sketched after this paragraph.
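
A common way such video U-Nets extend 2D image architectures is to factorise a 3D convolution into a spatial convolution within each frame followed by a temporal convolution across frames. The sketch below shows one such block; the class name, channel counts, and kernel sizes are illustrative assumptions rather than Imagen Video's actual configuration.

```python
# Hedged sketch of a factorised space-time convolution block, the kind of
# building block video U-Nets use to handle both image content and motion.
import torch
import torch.nn as nn

class SpaceTimeConv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # (1, 3, 3) kernel: convolves within each frame only (spatial).
        self.spatial = nn.Conv3d(channels, channels,
                                 kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # (3, 1, 1) kernel: mixes information across neighbouring frames (temporal).
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 1, 1), padding=(1, 0, 0))

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        return self.temporal(self.spatial(x))

block = SpaceTimeConv(channels=8)
x = torch.randn(2, 8, 16, 32, 32)
print(block(x).shape)  # torch.Size([2, 8, 16, 32, 32])
```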

In March 2023, Alibaba published a landmark research paper applying many of the principles of latent image diffusion models to video generation. Services such as Kaiber and Reemix have since adopted similar approaches to video generation in their respective products.
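
The latent-diffusion recipe cuts cost by running the denoising process in a compressed latent space rather than directly on pixels. The sketch below illustrates the idea with toy stand-in modules (an assumed encoder/decoder pair and a placeholder denoiser); it is not Alibaba's published architecture.

```python
# Minimal sketch of latent diffusion applied to video: frames are compressed
# into low-resolution latents by an autoencoder, denoising runs on the latents
# (far cheaper than on pixels), and the result is decoded back to frames.
import torch
import torch.nn as nn

downscale = 4  # e.g. 32x32 pixels -> 8x8 latents

encoder = nn.Conv3d(3, 4, kernel_size=(1, downscale, downscale),
                    stride=(1, downscale, downscale))
decoder = nn.ConvTranspose3d(4, 3, kernel_size=(1, downscale, downscale),
                             stride=(1, downscale, downscale))
denoiser = nn.Conv3d(4, 4, kernel_size=3, padding=1)  # placeholder for a U-Net

@torch.no_grad()
def generate(steps=50, frames=16, size=32):
    # Diffusion runs entirely in the compact latent space.
    z = torch.randn(1, 4, frames, size // downscale, size // downscale)
    for _ in range(steps):
        z = z - denoiser(z) / steps  # simplified denoising update
    return decoder(z)  # map latents back to pixel space

# During training, real videos would be encoded into the same latent space:
latents = encoder(torch.randn(1, 3, 16, 32, 32))
print(latents.shape)        # torch.Size([1, 4, 16, 8, 8])
print(generate().shape)     # torch.Size([1, 3, 16, 32, 32])
```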

Matthias Niessner and Lourdes Agapito at the AI company Synthesia work on developing 3D neural rendering techniques that can synthesise realistic video, using 2D and 3D neural representations of shape, appearance, and motion for controllable video synthesis of avatars.

Alternative approaches to text-to-video generation, beyond diffusion-based models, also exist.