Building your own private AI server is one of the most fun and exciting projects you can try in your home lab environment. It is also a great way to learn more about AI, LLM models, and GPU passthrough configuration. Self-hosting and configuring your own AI environment is not as difficult as it might sound. Let's take a look at Proxmox, Ollama, and OpenWebUI, as well as Docker Desktop and GPU passthrough, along with my best practices, tips, and tricks. I also show you the projects I am using private AI servers for. Check it out!

GPU passthrough to LXC containers in Proxmox: https://www.virtualizationhowto.com/2...
Run Ollama with NVIDIA GPU in Proxmox VMs and LXC containers: https://www.virtualizationhowto.com/2...

My 2025 Proxmox Build (affiliate links):
Minisforum BD795M – https://geni.us/bd795m
RackChoice 2U Micro ATX Compact – https://geni.us/rackchoice2u
Cooler Master MWE Gold 850 V2 – https://geni.us/coolermaster850
Crucial 128 GB 5600MT/sec RAM – https://geni.us/crucial128gbramkit
Noctua NH-L9i-17xx, Premium Low-Profile CPU Cooler – https://amzn.to/4hAKIwG
Crucial 96GB kit of DDR5 SODIMM memory – https://geni.us/noctuanhl9i17xx
Intel X520-DA2 10 GbE network adapter – https://geni.us/intelx520
Kingston 240 GB drive for boot – https://geni.us/kingston240
Samsung EVO 990 Pro 2TB – https://geni.us/samsung990pro2tb
MX-4 Thermal paste – https://geni.us/mx4thermalpaste
Case fans: https://geni.us/arcticp8

Check out the VHT forums to get your questions answered: https://www.virtualizationhowto.com/c...
Join the coolest home lab community here! https://www.skool.com/homelabexplorer...
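The video walks through spinning up the Ollama and OpenWebUI containers with `docker run` commands. As a rough equivalent, a Docker Compose sketch might look like the following (service names, volume names, and host ports are my own choices, not taken from the video):

```yaml
# Hypothetical docker-compose.yml sketch for Ollama + OpenWebUI.
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    ports:
      - "11434:11434"          # Ollama API
    volumes:
      - ollama:/root/.ollama   # persist downloaded models
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"            # browse to http://<host>:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama:
  open-webui:
```

For GPU acceleration inside the Ollama container (covered around the 8:43 mark), the NVIDIA Container Toolkit must be installed on the Docker host and the container granted GPU access (for example with `--gpus all` on the `docker run` command line).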
★ Subscribe to the channel: / @virtualizationhowto
★ My blog: https://www.virtualizationhowto.com
★ Twitter: / vspinmaster
★ LinkedIn: / brandon-lee-vht
★ Github: https://github.com/brandonleegit
★ Facebook: / 100092747277326
★ Discord: / discord
★ Pinterest: / brandonleevht

Introduction to Private AI in the home lab - 0:00
Why self-host GPT for private AI - 0:54
Prerequisites to running private AI at home - 1:27
Can you run this on something old? - 1:55
Docker to play around with AI - 2:12
Ollama overview and what it does - 2:33
OpenWebUI ChatGPT-style interface to interact with Ollama - 2:44
Using Docker and overview of the setup - 3:06
Docker run or Compose can be used - 3:49
Looking at Docker Desktop - 4:11
Command to spin up Ollama in Docker Desktop - 4:36
Spinning up the OpenWebUI container from the Docker command line - 5:55
Overview of what happens after we spin up both containers - 7:06
Showing how to browse out to the OpenWebUI - 7:40
First steps and adding models to your Ollama instance - 7:47
Talking about better performance with LLMs with GPUs - 8:17
Enabling the NVIDIA container toolkit and when it is needed - 8:43
Overview of GPU passthrough with Proxmox - 9:00
Options for running GPU passthrough in Proxmox (LXC containers and virtual machines) - 10:00
Steps for GPU passthrough overview - 10:50
IOMMU requirement - 11:05
Checking IOMMU - 11:20
Grepping for the device - 11:56
Claiming the IDs for GPU passthrough in Proxmox - 12:53
Blacklisting the drivers for the host - 13:02
Seeing the RTX video card recognized in the VM running in Proxmox - 13:29
Adding a PCI Device in your virtual machine - 13:51
Passthrough challenges in Proxmox - 15:00
Performance of GPU - 15:41
Quick tips and best practices of using this in the home lab - 16:20
Wrapping up self-hosting private AI in your home - 17:00
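The GPU passthrough chapters (IOMMU check, grepping for the device, claiming IDs, blacklisting drivers) follow the usual Proxmox host-side flow. A rough sketch of those steps, run as root on the Proxmox host, might look like this (the PCI IDs shown are placeholders for an example NVIDIA card; substitute the vendor:device IDs of your own GPU):

```shell
# 1. Confirm IOMMU is active. The kernel must boot with intel_iommu=on
#    (or amd_iommu=on) added to GRUB_CMDLINE_LINUX_DEFAULT in
#    /etc/default/grub, followed by update-grub and a reboot.
dmesg | grep -e DMAR -e IOMMU

# 2. Find the GPU and note its [vendor:device] IDs for the video and
#    audio functions.
lspci -nn | grep -i nvidia

# 3. Claim those IDs for vfio-pci so the host driver never binds them
#    (10de:2684,10de:22ba are placeholder example IDs).
echo "options vfio-pci ids=10de:2684,10de:22ba" > /etc/modprobe.d/vfio.conf

# 4. Blacklist the host GPU drivers.
cat >> /etc/modprobe.d/blacklist.conf <<'EOF'
blacklist nouveau
blacklist nvidia
EOF

# 5. Rebuild the initramfs and reboot, then attach the card to the VM
#    in the Proxmox web UI (VM > Hardware > Add > PCI Device).
update-initramfs -u -k all
```

These are host-configuration commands, so review each one against your own hardware before running them; a wrong ID in the vfio options is harmless, but blacklisting the wrong driver can leave the host console without video output.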