Practical Guide: Fine-Tuning Qwen3 with LoRA - Ivan Potapov
Links:
Notebook: https://github.com/ivan-digital/llm-a...
Article: https://blog.ivan.digital/finetuning-...

In this workshop, Ivan Potapov, AI researcher and practitioner, delivers a practical deep dive into fine-tuning the Qwen3 large language model using LoRA (Low-Rank Adaptation). Building on real datasets and experiments, Ivan demonstrates how to optimize model behavior, improve output structure, and maintain performance, all while reducing compute costs.

You'll learn about:
- Comparing fine-tuning strategies and choosing the right one for your model size
- Using supervised fine-tuning (SFT) to enforce structured outputs like JSON (sketched below)
- Optimizing hyperparameters such as batch size, attention settings, and LoRA rank (see the LoRA setup sketch below)
- Applying KL divergence to preserve the base model's capabilities (sketched below)
- Leveraging soft prompting with quantization for lightweight fine-tuning (sketched below)
- Managing data collation and tokenization for large-scale training (sketched below)
- Building strong baselines and benchmarks for reliable evaluation

Ideal for machine learning engineers, data scientists, and AI developers working on LLM customization, edge deployment, or applied NLP research. Whether you're scaling Qwen3 for production or experimenting with small fine-tuning projects, this session equips you with the technical know-how and a hands-on workflow to get results faster.

TIMECODES:
00:00 Fine-Tuning Workshop Kickoff
03:05 Comparing Fine-Tuning Methods
06:45 Databricks Dolly 15K Dataset Breakdown
10:15 Enforcing JSON with Supervised Fine-Tuning
14:05 Hyperparameter Tuning: Batch Size & Attention
17:45 Model Behavior with System Tokens
22:15 KL Divergence Explained for Regularization
26:40 Soft Prompting with Quantization Tricks
30:15 Few-Shot Optimization via Soft Prompts
34:50 Setting Baselines & Tuning LoRA Rank
39:00 LoRA Fine-Tuning Setup Guide
42:25 Tokenization & Data Collation Essentials
46:30 Fine-Tuning for Edge Inference Use Cases
51:40 Boosting Performance with Regularization
56:05 Why LLMs Are Stateless in Conversations
1:00:35 Starting Small: Benchmarks & Q&A Datasets

Connect with Ivan:
LinkedIn - /ivan-sur
Website - https://blog.ivan.digital/
GitHub - https://github.com/ivan-digital

Connect with DataTalks.Club:
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/...
Check other upcoming events - https://lu.ma/dtc-events
GitHub - https://github.com/DataTalksClub
LinkedIn - /datatalks-club
Twitter - /datatalksclub
Website - https://datatalks.club/

Connect with Alexey:
Twitter - /al_grigor
LinkedIn - /agrigorev

Check our free online courses:
ML Engineering course - http://mlzoomcamp.com
Data Engineering course - https://github.com/DataTalksClub/data...
MLOps course - https://github.com/DataTalksClub/mlop...
LLM course - https://github.com/DataTalksClub/llm-...
Open-source LLM course - https://github.com/DataTalksClub/open...
AI Dev Tools course - https://github.com/DataTalksClub/ai-d...
👉🏼 Read about all our courses in one place - https://datatalks.club/blog/guide-to-...

👋🏼 Support/inquiries:
If you want to support our community, use this link - https://github.com/sponsors/alexeygri...
If you're a company, reach us at alexey@datatalks.club

#Qwen3 #LoRA #FineTuning #LLM #MachineLearning #AI #AIDevelopment #SupervisedFineTuning #SoftPrompting #HyperparameterTuning #KLdivergence #Databricks #Dolly15K #Tokenization #ModelOptimization #AIWorkshop #DeepLearning #EdgeInference #IvanPotapov #AIEducation
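The sketches below are minimal illustrations of the techniques listed above, not the workshop's exact code; checkpoint names and hyperparameters are assumptions. First, a typical LoRA setup with Hugging Face PEFT, where the rank `r` is the main capacity/efficiency knob:

```python
# Minimal LoRA setup sketch with Hugging Face PEFT. The checkpoint and
# hyperparameter values are illustrative assumptions, not the workshop's exact ones.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen3-0.6B"  # assumption: any Qwen3 size works the same way
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,               # LoRA rank: the main capacity/efficiency knob
    lora_alpha=32,      # scaling factor, often set to 2*r
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```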
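Enforcing JSON with SFT usually comes down to the training data: every target completion is a strictly formatted JSON string. A hedged sketch on a Dolly-15k-style record, with an invented schema that may differ from the workshop's:

```python
# Sketch: convert an instruction/response record into a chat sample whose
# assistant turn is strict JSON. The schema here is invented for illustration.
import json

def to_chat_example(record):
    """Pair an instruction with a JSON-formatted target completion."""
    target = json.dumps({"answer": record["response"]}, ensure_ascii=False)
    return {
        "messages": [
            {"role": "system", "content": "Respond only with valid JSON."},
            {"role": "user", "content": record["instruction"]},
            {"role": "assistant", "content": target},
        ]
    }

record = {"instruction": "What is LoRA?",
          "response": "A parameter-efficient fine-tuning method."}
print(to_chat_example(record))
```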
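KL-divergence regularization keeps the fine-tuned model close to a frozen reference copy of the base model, which is what preserves its foundational capabilities. A minimal sketch of the combined loss; the weight `beta` is an assumed hyperparameter:

```python
# Sketch: cross-entropy on the labels plus a KL penalty that pulls the
# fine-tuned ("student") distribution toward the frozen reference model.
import torch
import torch.nn.functional as F

def kl_regularized_loss(student_logits, ref_logits, labels, beta=0.1):
    # Standard next-token loss; -100 marks positions excluded from the loss.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # KL(student || reference) over the vocabulary, both in log-space.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.log_softmax(ref_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    return ce + beta * kl  # beta trades task fit against staying near the base model
```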
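Soft prompting with quantization trains only a handful of virtual-token embeddings on top of a quantized base model, so almost nothing needs gradients. A hedged sketch with PEFT prompt tuning and bitsandbytes 4-bit loading; the token count and init text are illustrative:

```python
# Sketch: prompt tuning on a 4-bit base model. Only ~20 virtual-token
# embeddings are trained; the quantized base weights stay frozen.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PromptTuningConfig, PromptTuningInit, get_peft_model

model_name = "Qwen/Qwen3-0.6B"  # assumed checkpoint
bnb = BitsAndBytesConfig(load_in_4bit=True)  # quantize base weights to 4 bits
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb)

cfg = PromptTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,                     # length of the learned soft prompt
    prompt_tuning_init=PromptTuningInit.TEXT,  # warm-start from a text prompt
    prompt_tuning_init_text="Answer strictly in JSON:",
    tokenizer_name_or_path=model_name,
)
model = get_peft_model(model, cfg)
model.print_trainable_parameters()  # only the soft-prompt embeddings are trainable
```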
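Finally, data collation for causal-LM training typically pads each batch to its longest sequence and masks the padding out of the loss with -100. A sketch with the standard Hugging Face collator; the checkpoint name is again an assumption:

```python
# Sketch: tokenize, pad to the longest sequence in the batch, and let the
# collator build labels with padding masked to -100.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")  # assumed checkpoint
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure padding is defined

# mlm=False gives causal-LM labels (a shifted copy of input_ids, pads -> -100).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

batch = [tokenizer(t) for t in ["Short example.",
                                "A somewhat longer training example."]]
features = collator(batch)
print(features["input_ids"].shape, features["labels"].shape)  # padded to equal length
```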