Waver is a *high-performance foundation model* developed by Bytedance that unifies *image and video generation* within a single framework. It is designed to overcome common challenges in video generation, such as unsatisfactory quality in complex motion scenarios, unclear technical recipes for high-resolution output, and the need for separate models for different tasks. Waver directly generates videos of 5 to 10 seconds at a native 720p resolution, which are then *upscaled to 1080p* for enhanced clarity. The model simultaneously supports *text-to-video (T2V), image-to-video (I2V), and text-to-image (T2I)* generation.

Its core architecture includes a *Hybrid Stream DiT* that improves modality alignment and accelerates training convergence, and a *Cascade Refiner* that functions as a super-resolution module. Development also involved a *comprehensive data curation pipeline*, including an MLLM-based video quality model trained on manual annotations to filter high-quality samples, as well as *detailed training and inference recipes*. These contributions enable Waver to *excel at capturing complex motion*, achieving superior motion amplitude and temporal consistency.

Waver has demonstrated strong performance, ranking among the *Top 3 on both the T2V and I2V leaderboards* at Artificial Analysis, consistently outperforming existing open-source models and matching or surpassing state-of-the-art commercial solutions, particularly in complex motion scenarios. The aim of the technical report is to help the community train high-quality video generation models more efficiently. https://arxiv.org/pdf/2508.15761
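To make the two-stage flow concrete, here is a minimal Python sketch of the pipeline the report describes: a unified base model generates a 5-10 second clip at native 720p, and the Cascade Refiner upscales it to 1080p. Every function name, signature, and array stand-in below is an illustrative assumption, not the actual Waver code or API.

```python
# Minimal sketch of the two-stage inference flow described in the report:
# stage 1 generates a 5-10 s clip at native 720p (T2V, I2V, or T2I depending
# on the conditioning), stage 2 refines it to 1080p. Every name here is an
# illustrative placeholder; zero-filled arrays stand in for real model output.
import numpy as np


def base_generate(prompt: str, reference_image: np.ndarray | None = None,
                  duration_s: float = 5.0, fps: int = 24) -> np.ndarray:
    """Stand-in for the unified base model (the Hybrid Stream DiT).

    A text prompt alone corresponds to T2V; passing `reference_image`
    corresponds to I2V; a single-frame request would correspond to T2I.
    """
    assert 5.0 <= duration_s <= 10.0, "the report states 5-10 s clips"
    num_frames = int(duration_s * fps)
    # (frames, height, width, channels) at the 720p native resolution
    return np.zeros((num_frames, 720, 1280, 3), dtype=np.uint8)


def cascade_refine(video_720p: np.ndarray) -> np.ndarray:
    """Stand-in for the Cascade Refiner: 720p -> 1080p super-resolution."""
    frames, h, w, _ = video_720p.shape
    # Nearest-neighbour index mapping as a placeholder for the learned refiner.
    rows = np.arange(1080) * h // 1080
    cols = np.arange(1920) * w // 1920
    return video_720p[:, rows][:, :, cols]


if __name__ == "__main__":
    # Low fps only to keep this demo lightweight; real video uses normal rates.
    clip_720p = base_generate("a sailboat cresting a wave", duration_s=5.0, fps=4)
    clip_1080p = cascade_refine(clip_720p)
    print(clip_720p.shape, "->", clip_1080p.shape)
    # (20, 720, 1280, 3) -> (20, 1080, 1920, 3)
```

Generating at 720p and then refining to 1080p is the usual motivation for a cascaded design: it keeps the diffusion backbone's cost lower than generating at full resolution directly, while the refiner restores detail in a cheaper second pass.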