This video explores the feasibility of running large language models (LLMs) on hardware ranging from a $50 Raspberry Pi to a $50,000 AI workstation. The presenter tests different models and configurations on each setup, showcasing their performance and limitations to give viewers a sense of the hardware requirements for running LLMs locally.
This video demonstrates how to set up Ollama, a tool for running large language models (LLMs) locally, and connect it to Home Assistant Voice. The process involves installing Ollama on a server, adding the Ollama integration to Home Assistant, and choosing a model to use. The video then shows how to use the LLM to answer questions and interact with Home Assistant.
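Before wiring Ollama into Home Assistant, it can help to confirm the server responds to prompts on its own. The following is a minimal sketch, assuming Ollama is listening on its default port 11434 and that a model named "llama3" has already been pulled; neither detail is confirmed by the video.

```python
import requests

# Hypothetical check: send one prompt to a local Ollama server over its REST API.
# Assumes the default endpoint (port 11434) and an already-pulled "llama3" model.
OLLAMA_URL = "http://localhost:11434/api/generate"

response = requests.post(
    OLLAMA_URL,
    json={
        "model": "llama3",
        "prompt": "Which lights are on in the living room?",
        "stream": False,  # return the full answer in a single JSON payload
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["response"])  # the model's answer as plain text
```

If this prints a sensible reply, the same host and port can be entered when adding the Ollama integration in Home Assistant.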