Session 1 of the LLM Engineering class by AI Makerspace. We open-sourced the entire cohort from our 2024 session. Subscribe to our YouTube channel for more free AI content!

🧑‍💻 LLM Engineering refers to the evolving set of best practices for *training*, *fine-tuning*, and *aligning* LLMs to optimize their use and function.

🤖 Whether you're looking at OpenAI's GPT series, Google Gemini, Anthropic Claude, Mistral, Grok, or any other model provider, the core underlying architectures are extremely similar, from training methods to the data they use. Small Language Models (SLMs) with parameter counts under 7B, like Phi-3 from Microsoft, follow the same architectural patterns.

🏫 This course will provide you with the foundational concepts and code to train, fine-tune, and align LLMs using industry-standard and emerging approaches from the open-source edge heading into 2025.

*🤓 Become the expert* in your organization on all things training (pretraining, post-training), fine-tuning (supervised fine-tuning, instruction tuning, chat tuning, etc.), alignment (PPO, DPO, etc.), Small Language Models (SLMs), Model Merging, and more! A few minimal code sketches of these ideas follow at the end of this description.

#llm #learnai #freeclasses #ai #genai #agents

0:00 - Welcome to LLM Engineering by AI Makerspace
2:55 - The impact of the AIM Community
6:07 - Welcome to LLM Engineering!
8:16 - LLM Engineering Course Overview
11:32 - High-level overview of key terms, concepts, and popular tools
23:16 - Discussion: What is LLM Engineering, practically, and why does it matter?
28:53 - Session 1 Course Overview
30:04 - Module 1: The Transformer
35:24 - Module 2: Practical LLM Mechanics
38:42 - The Wiz analyzes the results achieved by prompting the LLM
45:37 - Module 3: Training, Fine-Tuning, and Alignment
51:05 - Module 4: Frontiers
52:10 - Session conclusion

🕳️ Go Deeper

*Concepts*

🎤 **Podcast**: Lex Fridman with Dario Amodei, CEO of Anthropic: • Dario Amodei: Anthropic CEO on Claude...
📜 **Constitutional AI**: [Reinforcement Learning from AI Feedback](https://arxiv.org/abs/2212.08073)
🔽 **Small Language Models (SLMs)**: [SuperNova](https://www.arcee.ai/product/supernova) and a [recap of our discussion with the creators](https://ckarchive.com/b/lmuehmh0pplx0...); TinyStories and Textbooks Are All You Need
⚗️ **Synthetic Data Generation**: likely to be [leveraged throughout the training stack](https://ckarchive.com/b/5quvh7hven09m...)
🗽 **State of AI**: [The State of AI Report, Oct 2024](https://www.stateof.ai/)
📈 **Scaling Laws**: [Andrej's thoughts](https://x.com/karpathy/status/1727731...) and [Training Compute-Optimal LLMs](https://arxiv.org/abs/2203.15556)
🪟 **In-Context Learning**: [Language Models are Few-Shot Learners](https://arxiv.org/abs/2005.14165)
👍 **RLHF**: [Reinforcement Learning from Human Feedback](https://huyenchip.com/2023/05/02/rlhf...)

*Code*

⚖️ **Instruction-Tuned Models**: [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Met...)
💬 **Fine-Tuned for GenZ Slang**: [GenZ Dataset](https://github.com/kaspercools/genz-d...) and [GenZAI application](https://huggingface.co/spaces/ai-make...)
🧠 **Reasoning Models**: [OpenAI's o1 preview](https://chatgpt.com/?model=o1-preview)
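To make a couple of the key terms above concrete, here is a minimal sketch of in-context (few-shot) learning: the model picks the task up from examples placed directly in the prompt, with no weight updates. It assumes the `openai` Python package and an `OPENAI_API_KEY` in your environment; the model name and examples are illustrative, not from the session.

```python
# In-context (few-shot) learning: the task is specified entirely in the
# prompt; the model's weights are never updated.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

few_shot_prompt = (
    "Translate English into GenZ slang.\n"
    "English: That party was amazing. -> Slang: That party slapped.\n"
    "English: I'm very tired. -> Slang: I'm lowkey exhausted, no cap.\n"
    "English: This food is delicious. -> Slang:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```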
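Supervised fine-tuning (SFT), by contrast, does update the weights on labeled examples. Here is a minimal sketch using Hugging Face's TRL library, assuming `pip install trl datasets`; the model, dataset, and step count are placeholders, and the exact `SFTTrainer` signature varies a bit across TRL versions.

```python
# Minimal supervised fine-tuning (SFT) sketch with Hugging Face TRL.
# Model, dataset, and hyperparameters are placeholders for illustration.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# A small public chat dataset (used in TRL's own docs); swap in your own.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B",  # placeholder small base model
    train_dataset=dataset,
    args=SFTConfig(output_dir="./sft-demo", max_steps=100),
)
trainer.train()
```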
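Finally, the instruction-tuned checkpoint linked under *Code* can be queried through its chat template via `transformers`. This sketch assumes access to the gated Llama 3 weights and a GPU with enough memory; any instruct model works the same way.

```python
# Sketch: prompting an instruction-tuned model through its chat template.
# Assumes `pip install transformers torch` plus access to the gated
# Llama 3 checkpoint; the messages are illustrative.
from transformers import pipeline

pipe = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")

chat = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "In one sentence, what is instruction tuning?"},
]

# The pipeline applies the model's chat template before generation.
out = pipe(chat, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```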