#LTX-2 video with audio | Advanced ComfyUI Workflow | T2V + I2V | GGUF & Safetensors | 20s @ 50FPS | Part 1

In this video, I introduce LTX-2, a powerful open-source DiT-based audio-video generation model that generates synchronized video and audio in a single pipeline, all running locally.

𝗦𝗮𝗺𝗽𝗹𝗲 𝗼𝘂𝘁𝗽𝘂𝘁:
• LTX-2 Sample - 1280x720

This is Part 1 of a multi-part series showcasing an advanced ComfyUI workflow that goes far beyond the default setup.

🔥 What LTX-2 Can Do
• Generate up to 20 seconds of video at 50 FPS
• Produce audio and video together in a single model
• Scale output up to 4K resolution using LTX upscaling models
• Designed for local execution with open weights

🚀 What This Video Covers
• An advanced modular ComfyUI workflow for LTX-2
• Support for both safetensors and GGUF
• Text-to-Video (T2V) and Image-to-Video (I2V) in one unified workflow
• Model split into: Diffusion Model, Text Encoders, Video VAE, Audio VAE
• Support for: full Dev model, Distilled model, Distilled LoRA on the Dev model
• Two-stage sampling: base generation at half resolution, then 2× latent upscaling for the final output

⚙️ Model Variants Explained
Dev Model: CFG 4, Steps 20
Distilled Model / Distilled LoRA: CFG 1, Steps 8

🧠 Hardware Test Setup
• GPU: RTX 3060 (12GB VRAM)
• Resolution: 720p
• Video length: 20 seconds
• FPS: 25
• System RAM: 48GB

⚠️ Important Note About I2V (Low-VRAM GPUs)
Text-to-Video runs fine on low-VRAM GPUs, but *Image-to-Video (I2V) is extremely VRAM intensive*. If you are running I2V on a GPU with ≤ 12GB VRAM, you must start ComfyUI with reserved VRAM to avoid out-of-memory errors.

▶️ ComfyUI Command Line for I2V on Low VRAM
python main.py --lowvram --reserve-vram 10
• `10` means 10 GB of VRAM reserved specifically for latents
• Recommended for I2V on GPUs with ≤ 12GB VRAM
• With a shorter video length or *lower resolution*, this value can be reduced accordingly

You will need this custom node for GGUF support:
https://github.com/vantagewithai/Vant...
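As a rough sketch, the low-VRAM launch above could be wrapped in a small helper that picks the reserve from the card's VRAM size. The 10 GB figure for ≤ 12GB cards is the video's recommendation; the smaller reserve for larger cards is purely my assumption, so tune it for your setup:

```shell
#!/bin/sh
# Hypothetical launch helper for I2V on low-VRAM GPUs.
# The 10 GB reserve for <=12 GB cards comes from the video;
# the smaller reserve for bigger cards is an assumption.
VRAM_GB=${1:-12}                 # total GPU VRAM in GB (RTX 3060 = 12)

if [ "$VRAM_GB" -le 12 ]; then
    RESERVE=10                   # full reserve recommended for I2V
else
    RESERVE=6                    # assumed: larger cards need less headroom
fi

CMD="python main.py --lowvram --reserve-vram $RESERVE"
echo "$CMD"
# Uncomment to actually launch ComfyUI:
# $CMD
```

Run it from your ComfyUI folder, e.g. `sh launch_i2v.sh 12`.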
📂 Installation
Clone into your ComfyUI `custom_nodes` directory:

cd ComfyUI/custom_nodes
git clone https://github.com/vantagewithai/Vant...
pip install -r requirements.txt

Restart ComfyUI after installation.

Model links
Models (dev)
• BF16: https://huggingface.co/vantagewithai/...
• FP8: https://huggingface.co/vantagewithai/...
• GGUF: https://huggingface.co/vantagewithai/...
Models (distilled)
• BF16: https://huggingface.co/vantagewithai/...
• FP8: https://huggingface.co/vantagewithai/...
• GGUF: https://huggingface.co/vantagewithai/...
text_encoders
• https://huggingface.co/Comfy-Org/ltx-...
• https://huggingface.co/vantagewithai/...
vae
• https://huggingface.co/vantagewithai/...
• https://huggingface.co/vantagewithai/...
loras
• https://huggingface.co/Lightricks/LTX...
• https://huggingface.co/Lightricks/LTX...
latent_upscale_models
• https://huggingface.co/Lightricks/LTX...
Workflow download
• https://huggingface.co/vantagewithai/...

Model Storage Location
📂 ComfyUI/
├── 📂 models/
│   ├── 📂 checkpoints/
│   │   └── ltx-2-19b-audio_vae.safetensors
│   ├── 📂 diffusion_models/
│   │   └── ltx-2-19b-dev-model.safetensors
│   ├── 📂 text_encoders/
│   │   ├── gemma_3_12B_it.safetensors
│   │   └── ltx-2-19b-text_encoder.safetensors
│   ├── 📂 vae/
│   │   └── ltx-2-19b-VAE.safetensors
│   ├── 📂 loras/
│   │   ├── ltx-2-19b-lora-camera-control-dolly-left.safetensors
│   │   └── ltx-2-19b-distilled-lora-384.safetensors
│   └── 📂 latent_upscale_models/
│       └── ltx-2-spatial-upscaler-x2-1.0.safetensors

🔜 Coming in Part 2
• Depth map support
• ControlNet integration
• Advanced motion and structure control

If this helped you:
👍 Like the video
🔔 Subscribe for Part 2
💬 Leave a comment with your setup or questions

Thanks for watching!
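To prepare the storage layout above before downloading the models, here is a minimal sketch. Folder names are taken from the tree in the description; the `BASE` path assumes you run it from the directory containing your ComfyUI install, so adjust it if yours lives elsewhere:

```shell
#!/bin/sh
# Create the model subfolders used by this workflow.
# Folder names come from the tree above; BASE is an assumption.
BASE="ComfyUI/models"

for d in checkpoints diffusion_models text_encoders vae loras latent_upscale_models; do
    mkdir -p "$BASE/$d"          # -p: no error if the folder already exists
done

ls "$BASE"                       # verify the layout
```

After running it, drop each downloaded `.safetensors` / `.gguf` file into the matching subfolder shown in the tree.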